00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1000 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3667 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.219 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.220 The recommended git tool is: git 00:00:00.220 using credential 00000000-0000-0000-0000-000000000002 00:00:00.221 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.235 Fetching changes from the remote Git repository 00:00:00.237 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.251 Using shallow fetch with depth 1 00:00:00.251 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.251 > git --version # timeout=10 00:00:00.267 > git --version # 'git version 2.39.2' 00:00:00.267 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.278 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.278 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.178 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.190 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.202 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.202 > git config core.sparsecheckout # timeout=10 00:00:06.213 > git read-tree -mu HEAD # timeout=10 00:00:06.230 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.260 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.260 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.379 [Pipeline] Start of Pipeline 00:00:06.391 [Pipeline] library 00:00:06.393 Loading library shm_lib@master 00:00:06.393 Library shm_lib@master is cached. Copying from home. 00:00:06.405 [Pipeline] node 00:00:06.415 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.416 [Pipeline] { 00:00:06.425 [Pipeline] catchError 00:00:06.427 [Pipeline] { 00:00:06.438 [Pipeline] wrap 00:00:06.445 [Pipeline] { 00:00:06.453 [Pipeline] stage 00:00:06.455 [Pipeline] { (Prologue) 00:00:06.470 [Pipeline] echo 00:00:06.471 Node: VM-host-SM0 00:00:06.475 [Pipeline] cleanWs 00:00:06.485 [WS-CLEANUP] Deleting project workspace... 00:00:06.485 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.490 [WS-CLEANUP] done 00:00:06.670 [Pipeline] setCustomBuildProperty 00:00:06.737 [Pipeline] httpRequest 00:00:07.265 [Pipeline] echo 00:00:07.267 Sorcerer 10.211.164.101 is alive 00:00:07.277 [Pipeline] retry 00:00:07.279 [Pipeline] { 00:00:07.293 [Pipeline] httpRequest 00:00:07.297 HttpMethod: GET 00:00:07.298 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.298 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.317 Response Code: HTTP/1.1 200 OK 00:00:07.317 Success: Status code 200 is in the accepted range: 200,404 00:00:07.318 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:23.343 [Pipeline] } 00:00:23.360 [Pipeline] // retry 00:00:23.368 [Pipeline] sh 00:00:23.652 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:23.669 [Pipeline] httpRequest 00:00:24.082 [Pipeline] echo 00:00:24.084 Sorcerer 10.211.164.101 is alive 00:00:24.094 [Pipeline] retry 00:00:24.096 [Pipeline] { 00:00:24.112 [Pipeline] httpRequest 00:00:24.117 HttpMethod: GET 00:00:24.118 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:24.119 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:24.120 Response Code: HTTP/1.1 200 OK 00:00:24.120 Success: Status code 200 is in the accepted range: 200,404 00:00:24.121 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:40.624 [Pipeline] } 00:00:40.640 [Pipeline] // retry 00:00:40.649 [Pipeline] sh 00:00:40.934 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:43.479 [Pipeline] sh 00:00:43.759 + git -C spdk log --oneline -n5 00:00:43.760 c13c99a5e test: Various fixes for Fedora40 00:00:43.760 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:43.760 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:43.760 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:43.760 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:43.779 [Pipeline] withCredentials 00:00:43.790 > git --version # timeout=10 00:00:43.803 > git --version # 'git version 2.39.2' 00:00:43.818 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:43.820 [Pipeline] { 00:00:43.829 [Pipeline] retry 00:00:43.831 [Pipeline] { 00:00:43.846 [Pipeline] sh 00:00:44.126 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:44.705 [Pipeline] } 00:00:44.725 [Pipeline] // retry 00:00:44.731 [Pipeline] } 00:00:44.747 [Pipeline] // withCredentials 00:00:44.759 [Pipeline] httpRequest 00:00:45.200 [Pipeline] echo 00:00:45.203 Sorcerer 10.211.164.101 is alive 00:00:45.213 [Pipeline] retry 00:00:45.216 [Pipeline] { 00:00:45.231 [Pipeline] httpRequest 00:00:45.236 HttpMethod: GET 00:00:45.236 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:45.237 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:45.243 Response Code: HTTP/1.1 200 OK 00:00:45.243 Success: Status code 200 is in the accepted range: 200,404 00:00:45.244 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:15.623 [Pipeline] } 00:01:15.646 [Pipeline] // 
retry 00:01:15.655 [Pipeline] sh 00:01:15.942 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:17.332 [Pipeline] sh 00:01:17.616 + git -C dpdk log --oneline -n5 00:01:17.616 eeb0605f11 version: 23.11.0 00:01:17.617 238778122a doc: update release notes for 23.11 00:01:17.617 46aa6b3cfc doc: fix description of RSS features 00:01:17.617 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:17.617 7e421ae345 devtools: support skipping forbid rule check 00:01:17.630 [Pipeline] writeFile 00:01:17.640 [Pipeline] sh 00:01:17.915 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:17.927 [Pipeline] sh 00:01:18.209 + cat autorun-spdk.conf 00:01:18.209 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.209 SPDK_TEST_NVMF=1 00:01:18.209 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.209 SPDK_TEST_USDT=1 00:01:18.209 SPDK_RUN_UBSAN=1 00:01:18.209 SPDK_TEST_NVMF_MDNS=1 00:01:18.209 NET_TYPE=virt 00:01:18.209 SPDK_JSONRPC_GO_CLIENT=1 00:01:18.209 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:18.209 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:18.209 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.217 RUN_NIGHTLY=1 00:01:18.219 [Pipeline] } 00:01:18.236 [Pipeline] // stage 00:01:18.255 [Pipeline] stage 00:01:18.258 [Pipeline] { (Run VM) 00:01:18.273 [Pipeline] sh 00:01:18.557 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:18.557 + echo 'Start stage prepare_nvme.sh' 00:01:18.557 Start stage prepare_nvme.sh 00:01:18.557 + [[ -n 2 ]] 00:01:18.557 + disk_prefix=ex2 00:01:18.557 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:18.557 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:18.557 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:18.557 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.557 ++ SPDK_TEST_NVMF=1 00:01:18.557 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.557 ++ SPDK_TEST_USDT=1 00:01:18.557 ++ SPDK_RUN_UBSAN=1 00:01:18.557 ++ SPDK_TEST_NVMF_MDNS=1 00:01:18.557 ++ NET_TYPE=virt 00:01:18.557 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:18.557 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:18.557 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:18.557 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.557 ++ RUN_NIGHTLY=1 00:01:18.557 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:18.557 + nvme_files=() 00:01:18.557 + declare -A nvme_files 00:01:18.557 + backend_dir=/var/lib/libvirt/images/backends 00:01:18.557 + nvme_files['nvme.img']=5G 00:01:18.557 + nvme_files['nvme-cmb.img']=5G 00:01:18.557 + nvme_files['nvme-multi0.img']=4G 00:01:18.557 + nvme_files['nvme-multi1.img']=4G 00:01:18.557 + nvme_files['nvme-multi2.img']=4G 00:01:18.557 + nvme_files['nvme-openstack.img']=8G 00:01:18.557 + nvme_files['nvme-zns.img']=5G 00:01:18.557 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:18.557 + (( SPDK_TEST_FTL == 1 )) 00:01:18.557 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:18.557 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:18.557 + for nvme in "${!nvme_files[@]}" 00:01:18.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:18.557 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.557 + for nvme in "${!nvme_files[@]}" 00:01:18.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:18.557 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.557 + for nvme in "${!nvme_files[@]}" 00:01:18.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:18.557 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:18.557 + for nvme in "${!nvme_files[@]}" 00:01:18.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:18.557 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.557 + for nvme in "${!nvme_files[@]}" 00:01:18.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:18.557 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.557 + for nvme in "${!nvme_files[@]}" 00:01:18.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:18.816 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.816 + for nvme in "${!nvme_files[@]}" 00:01:18.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:18.816 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.816 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:18.816 + echo 'End stage prepare_nvme.sh' 00:01:18.816 End stage prepare_nvme.sh 00:01:18.830 [Pipeline] sh 00:01:19.117 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:19.117 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:19.117 00:01:19.117 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:19.117 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:19.117 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:19.117 HELP=0 00:01:19.117 DRY_RUN=0 00:01:19.117 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:19.117 NVME_DISKS_TYPE=nvme,nvme, 00:01:19.117 NVME_AUTO_CREATE=0 00:01:19.117 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:19.117 NVME_CMB=,, 00:01:19.117 NVME_PMR=,, 00:01:19.117 NVME_ZNS=,, 00:01:19.117 NVME_MS=,, 00:01:19.117 NVME_FDP=,, 00:01:19.117 
SPDK_VAGRANT_DISTRO=fedora39 00:01:19.117 SPDK_VAGRANT_VMCPU=10 00:01:19.117 SPDK_VAGRANT_VMRAM=12288 00:01:19.117 SPDK_VAGRANT_PROVIDER=libvirt 00:01:19.117 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:19.117 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:19.117 SPDK_OPENSTACK_NETWORK=0 00:01:19.117 VAGRANT_PACKAGE_BOX=0 00:01:19.117 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:19.117 FORCE_DISTRO=true 00:01:19.117 VAGRANT_BOX_VERSION= 00:01:19.117 EXTRA_VAGRANTFILES= 00:01:19.117 NIC_MODEL=e1000 00:01:19.117 00:01:19.117 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:19.117 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.667 Bringing machine 'default' up with 'libvirt' provider... 00:01:22.621 ==> default: Creating image (snapshot of base box volume). 00:01:22.621 ==> default: Creating domain with the following settings... 00:01:22.621 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732593443_a60c821fd9bba1cbda7f 00:01:22.621 ==> default: -- Domain type: kvm 00:01:22.621 ==> default: -- Cpus: 10 00:01:22.621 ==> default: -- Feature: acpi 00:01:22.621 ==> default: -- Feature: apic 00:01:22.621 ==> default: -- Feature: pae 00:01:22.621 ==> default: -- Memory: 12288M 00:01:22.621 ==> default: -- Memory Backing: hugepages: 00:01:22.621 ==> default: -- Management MAC: 00:01:22.621 ==> default: -- Loader: 00:01:22.621 ==> default: -- Nvram: 00:01:22.621 ==> default: -- Base box: spdk/fedora39 00:01:22.621 ==> default: -- Storage pool: default 00:01:22.621 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732593443_a60c821fd9bba1cbda7f.img (20G) 00:01:22.621 ==> default: -- Volume Cache: default 00:01:22.621 ==> default: -- Kernel: 00:01:22.621 ==> default: -- Initrd: 00:01:22.621 ==> default: -- Graphics Type: vnc 00:01:22.621 ==> default: -- Graphics Port: -1 00:01:22.621 ==> default: -- Graphics IP: 127.0.0.1 00:01:22.621 ==> default: -- Graphics Password: Not defined 00:01:22.621 ==> default: -- Video Type: cirrus 00:01:22.621 ==> default: -- Video VRAM: 9216 00:01:22.621 ==> default: -- Sound Type: 00:01:22.621 ==> default: -- Keymap: en-us 00:01:22.621 ==> default: -- TPM Path: 00:01:22.621 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:22.621 ==> default: -- Command line args: 00:01:22.621 ==> default: -> value=-device, 00:01:22.621 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:22.621 ==> default: -> value=-drive, 00:01:22.621 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:22.621 ==> default: -> value=-device, 00:01:22.621 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.621 ==> default: -> value=-device, 00:01:22.621 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:22.621 ==> default: -> value=-drive, 00:01:22.621 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:22.621 ==> default: -> value=-device, 00:01:22.621 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.621 ==> default: -> value=-drive, 00:01:22.621 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:22.621 ==> default: -> value=-device, 00:01:22.621 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.621 ==> default: -> value=-drive, 00:01:22.621 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:22.621 ==> default: -> value=-device, 00:01:22.621 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.880 ==> default: Creating shared folders metadata... 00:01:22.880 ==> default: Starting domain. 00:01:24.785 ==> default: Waiting for domain to get an IP address... 00:01:39.669 ==> default: Waiting for SSH to become available... 00:01:40.604 ==> default: Configuring and enabling network interfaces... 00:01:45.879 default: SSH address: 192.168.121.213:22 00:01:45.879 default: SSH username: vagrant 00:01:45.879 default: SSH auth method: private key 00:01:47.784 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:54.351 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:00.915 ==> default: Mounting SSHFS shared folder... 00:02:02.325 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:02.325 ==> default: Checking Mount.. 00:02:03.261 ==> default: Folder Successfully Mounted! 00:02:03.261 ==> default: Running provisioner: file... 00:02:04.198 default: ~/.gitconfig => .gitconfig 00:02:04.766 00:02:04.766 SUCCESS! 00:02:04.766 00:02:04.766 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:04.766 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:04.766 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:04.766 00:02:04.776 [Pipeline] } 00:02:04.794 [Pipeline] // stage 00:02:04.803 [Pipeline] dir 00:02:04.804 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:04.805 [Pipeline] { 00:02:04.821 [Pipeline] catchError 00:02:04.823 [Pipeline] { 00:02:04.837 [Pipeline] sh 00:02:05.118 + vagrant ssh-config --host vagrant 00:02:05.118 + sed -ne /^Host/,$p 00:02:05.118 + tee ssh_conf 00:02:07.649 Host vagrant 00:02:07.649 HostName 192.168.121.213 00:02:07.649 User vagrant 00:02:07.649 Port 22 00:02:07.649 UserKnownHostsFile /dev/null 00:02:07.649 StrictHostKeyChecking no 00:02:07.649 PasswordAuthentication no 00:02:07.649 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:07.649 IdentitiesOnly yes 00:02:07.649 LogLevel FATAL 00:02:07.649 ForwardAgent yes 00:02:07.649 ForwardX11 yes 00:02:07.649 00:02:07.661 [Pipeline] withEnv 00:02:07.663 [Pipeline] { 00:02:07.677 [Pipeline] sh 00:02:07.956 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:07.956 source /etc/os-release 00:02:07.956 [[ -e /image.version ]] && img=$(< /image.version) 00:02:07.956 # Minimal, systemd-like check. 
00:02:07.956 if [[ -e /.dockerenv ]]; then 00:02:07.956 # Clear garbage from the node's name: 00:02:07.956 # agt-er_autotest_547-896 -> autotest_547-896 00:02:07.956 # $HOSTNAME is the actual container id 00:02:07.956 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:07.956 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:07.956 # We can assume this is a mount from a host where container is running, 00:02:07.956 # so fetch its hostname to easily identify the target swarm worker. 00:02:07.956 container="$(< /etc/hostname) ($agent)" 00:02:07.956 else 00:02:07.956 # Fallback 00:02:07.956 container=$agent 00:02:07.956 fi 00:02:07.956 fi 00:02:07.956 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:07.956 00:02:08.226 [Pipeline] } 00:02:08.242 [Pipeline] // withEnv 00:02:08.250 [Pipeline] setCustomBuildProperty 00:02:08.265 [Pipeline] stage 00:02:08.267 [Pipeline] { (Tests) 00:02:08.284 [Pipeline] sh 00:02:08.563 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:08.835 [Pipeline] sh 00:02:09.124 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:09.407 [Pipeline] timeout 00:02:09.408 Timeout set to expire in 1 hr 0 min 00:02:09.409 [Pipeline] { 00:02:09.426 [Pipeline] sh 00:02:09.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:10.271 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:10.283 [Pipeline] sh 00:02:10.562 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:10.832 [Pipeline] sh 00:02:11.112 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:11.385 [Pipeline] sh 00:02:11.665 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:11.924 ++ readlink -f spdk_repo 00:02:11.924 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:11.924 + [[ -n /home/vagrant/spdk_repo ]] 00:02:11.924 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:11.924 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:11.924 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:11.924 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:11.924 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:11.924 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:11.924 + cd /home/vagrant/spdk_repo 00:02:11.924 + source /etc/os-release 00:02:11.924 ++ NAME='Fedora Linux' 00:02:11.924 ++ VERSION='39 (Cloud Edition)' 00:02:11.924 ++ ID=fedora 00:02:11.924 ++ VERSION_ID=39 00:02:11.924 ++ VERSION_CODENAME= 00:02:11.924 ++ PLATFORM_ID=platform:f39 00:02:11.924 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.924 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.924 ++ LOGO=fedora-logo-icon 00:02:11.924 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.924 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.924 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.924 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.924 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.924 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.924 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.924 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.924 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.924 ++ SUPPORT_END=2024-11-12 00:02:11.924 ++ VARIANT='Cloud Edition' 00:02:11.924 ++ VARIANT_ID=cloud 00:02:11.925 + uname -a 00:02:11.925 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:11.925 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:11.925 Hugepages 00:02:11.925 node hugesize free / total 00:02:11.925 node0 1048576kB 0 / 0 00:02:11.925 node0 2048kB 0 / 0 00:02:11.925 00:02:11.925 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.925 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:11.925 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:11.925 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:11.925 + rm -f /tmp/spdk-ld-path 00:02:11.925 + source autorun-spdk.conf 00:02:11.925 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.925 ++ SPDK_TEST_NVMF=1 00:02:11.925 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.925 ++ SPDK_TEST_USDT=1 00:02:11.925 ++ SPDK_RUN_UBSAN=1 00:02:11.925 ++ SPDK_TEST_NVMF_MDNS=1 00:02:11.925 ++ NET_TYPE=virt 00:02:11.925 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:11.925 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.925 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:11.925 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.925 ++ RUN_NIGHTLY=1 00:02:11.925 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:11.925 + [[ -n '' ]] 00:02:11.925 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:12.184 + for M in /var/spdk/build-*-manifest.txt 00:02:12.184 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:12.184 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.184 + for M in /var/spdk/build-*-manifest.txt 00:02:12.184 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:12.184 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.184 + for M in /var/spdk/build-*-manifest.txt 00:02:12.184 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:12.184 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.184 ++ uname 00:02:12.184 + [[ Linux == \L\i\n\u\x ]] 00:02:12.184 + sudo dmesg -T 00:02:12.184 + sudo dmesg --clear 00:02:12.184 + dmesg_pid=5977 00:02:12.184 + sudo dmesg -Tw 00:02:12.184 + [[ Fedora Linux == FreeBSD ]] 00:02:12.184 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:12.184 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.184 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:12.184 + [[ -x /usr/src/fio-static/fio ]] 00:02:12.184 + export FIO_BIN=/usr/src/fio-static/fio 00:02:12.184 + FIO_BIN=/usr/src/fio-static/fio 00:02:12.184 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:12.184 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:12.184 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:12.184 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:12.184 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:12.184 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:12.184 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:12.184 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:12.184 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:12.184 Test configuration: 00:02:12.184 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.184 SPDK_TEST_NVMF=1 00:02:12.184 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:12.184 SPDK_TEST_USDT=1 00:02:12.184 SPDK_RUN_UBSAN=1 00:02:12.184 SPDK_TEST_NVMF_MDNS=1 00:02:12.184 NET_TYPE=virt 00:02:12.184 SPDK_JSONRPC_GO_CLIENT=1 00:02:12.184 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:12.184 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:12.184 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.184 RUN_NIGHTLY=1 03:58:13 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:12.184 03:58:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:12.184 03:58:13 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:12.184 03:58:13 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.184 03:58:13 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.184 03:58:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.184 03:58:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.184 03:58:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.184 03:58:13 -- paths/export.sh@5 -- $ export PATH 00:02:12.185 03:58:13 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.185 03:58:13 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:12.185 03:58:13 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:12.185 03:58:13 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732593493.XXXXXX 00:02:12.185 03:58:13 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732593493.KiPmYc 00:02:12.185 03:58:13 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:12.185 03:58:13 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:12.185 03:58:13 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:12.185 03:58:13 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:12.185 03:58:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:12.185 03:58:13 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:12.185 03:58:13 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:12.185 03:58:13 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:12.185 03:58:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.185 03:58:13 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:12.185 03:58:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:12.185 03:58:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:12.185 03:58:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:12.185 03:58:13 -- spdk/autobuild.sh@16 -- $ date -u 00:02:12.444 Tue Nov 26 03:58:13 AM UTC 2024 00:02:12.444 03:58:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:12.444 LTS-67-gc13c99a5e 00:02:12.444 03:58:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:12.444 03:58:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:12.444 03:58:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:12.444 03:58:13 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:12.444 03:58:13 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:12.444 03:58:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.444 ************************************ 00:02:12.444 START TEST ubsan 00:02:12.444 ************************************ 00:02:12.444 using ubsan 00:02:12.444 03:58:13 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:12.444 00:02:12.444 real 0m0.000s 00:02:12.444 user 0m0.000s 00:02:12.444 sys 0m0.000s 00:02:12.444 03:58:13 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:12.444 03:58:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.444 ************************************ 00:02:12.444 END TEST ubsan 00:02:12.444 ************************************ 00:02:12.444 
03:58:14 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:12.444 03:58:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:12.444 03:58:14 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:12.444 03:58:14 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:12.444 03:58:14 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:12.444 03:58:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.444 ************************************ 00:02:12.444 START TEST build_native_dpdk 00:02:12.444 ************************************ 00:02:12.444 03:58:14 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:12.444 03:58:14 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:12.444 03:58:14 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:12.444 03:58:14 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:12.444 03:58:14 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:12.444 03:58:14 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:12.444 03:58:14 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:12.444 03:58:14 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:12.444 03:58:14 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:12.444 03:58:14 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:12.444 03:58:14 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:12.444 03:58:14 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:12.444 03:58:14 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:12.444 03:58:14 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:12.444 03:58:14 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:12.444 03:58:14 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:12.444 03:58:14 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:12.444 03:58:14 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:12.444 eeb0605f11 version: 23.11.0 00:02:12.444 238778122a doc: update release notes for 23.11 00:02:12.444 46aa6b3cfc doc: fix description of RSS features 00:02:12.444 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:12.444 7e421ae345 devtools: support skipping forbid rule check 00:02:12.444 03:58:14 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:12.444 03:58:14 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:12.444 03:58:14 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:12.444 03:58:14 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:12.444 03:58:14 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:12.444 03:58:14 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:12.444 03:58:14 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:12.444 03:58:14 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:12.444 03:58:14 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:12.444 03:58:14 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:12.444 03:58:14 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:12.444 03:58:14 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:12.444 03:58:14 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:12.444 03:58:14 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:12.444 03:58:14 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:12.444 03:58:14 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:12.444 03:58:14 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:12.444 03:58:14 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.444 03:58:14 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:12.444 03:58:14 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:12.444 03:58:14 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:12.444 03:58:14 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:12.444 03:58:14 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:12.445 03:58:14 -- scripts/common.sh@343 -- $ case "$op" in 00:02:12.445 03:58:14 -- scripts/common.sh@344 -- $ : 1 00:02:12.445 03:58:14 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:12.445 03:58:14 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:12.445 03:58:14 -- scripts/common.sh@364 -- $ decimal 23 00:02:12.445 03:58:14 -- scripts/common.sh@352 -- $ local d=23 00:02:12.445 03:58:14 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.445 03:58:14 -- scripts/common.sh@354 -- $ echo 23 00:02:12.445 03:58:14 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:12.445 03:58:14 -- scripts/common.sh@365 -- $ decimal 21 00:02:12.445 03:58:14 -- scripts/common.sh@352 -- $ local d=21 00:02:12.445 03:58:14 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:12.445 03:58:14 -- scripts/common.sh@354 -- $ echo 21 00:02:12.445 03:58:14 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:12.445 03:58:14 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:12.445 03:58:14 -- scripts/common.sh@366 -- $ return 1 00:02:12.445 03:58:14 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:12.445 patching file config/rte_config.h 00:02:12.445 Hunk #1 succeeded at 60 (offset 1 line). 00:02:12.445 03:58:14 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:12.445 03:58:14 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:12.445 03:58:14 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:12.445 03:58:14 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:12.445 03:58:14 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:12.445 03:58:14 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:12.445 03:58:14 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.445 03:58:14 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:12.445 03:58:14 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:12.445 03:58:14 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:12.445 03:58:14 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:12.445 03:58:14 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:12.445 03:58:14 -- scripts/common.sh@343 -- $ case "$op" in 00:02:12.445 03:58:14 -- scripts/common.sh@344 -- $ : 1 00:02:12.445 03:58:14 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:12.445 03:58:14 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:12.445 03:58:14 -- scripts/common.sh@364 -- $ decimal 23 00:02:12.445 03:58:14 -- scripts/common.sh@352 -- $ local d=23 00:02:12.445 03:58:14 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.445 03:58:14 -- scripts/common.sh@354 -- $ echo 23 00:02:12.445 03:58:14 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:12.445 03:58:14 -- scripts/common.sh@365 -- $ decimal 24 00:02:12.445 03:58:14 -- scripts/common.sh@352 -- $ local d=24 00:02:12.445 03:58:14 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.445 03:58:14 -- scripts/common.sh@354 -- $ echo 24 00:02:12.445 03:58:14 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:12.445 03:58:14 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:12.445 03:58:14 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:12.445 03:58:14 -- scripts/common.sh@367 -- $ return 0 00:02:12.445 03:58:14 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:12.445 patching file lib/pcapng/rte_pcapng.c 00:02:12.445 03:58:14 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:12.445 03:58:14 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:12.445 03:58:14 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:12.445 03:58:14 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:12.445 03:58:14 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.007 The Meson build system 00:02:19.007 Version: 1.5.0 00:02:19.007 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:19.007 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:19.007 Build type: native build 00:02:19.007 Program cat found: YES (/usr/bin/cat) 00:02:19.007 Project name: DPDK 00:02:19.007 Project version: 23.11.0 00:02:19.007 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:19.007 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:19.007 Host machine cpu family: x86_64 00:02:19.007 Host machine cpu: x86_64 00:02:19.007 Message: ## Building in Developer Mode ## 00:02:19.007 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:19.007 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:19.007 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.007 Program python3 found: YES (/usr/bin/python3) 00:02:19.007 Program cat found: YES (/usr/bin/cat) 00:02:19.007 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:19.007 Compiler for C supports arguments -march=native: YES 00:02:19.007 Checking for size of "void *" : 8 00:02:19.007 Checking for size of "void *" : 8 (cached) 00:02:19.007 Library m found: YES 00:02:19.007 Library numa found: YES 00:02:19.007 Has header "numaif.h" : YES 00:02:19.007 Library fdt found: NO 00:02:19.007 Library execinfo found: NO 00:02:19.007 Has header "execinfo.h" : YES 00:02:19.007 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:19.007 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.007 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.007 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.007 Run-time dependency openssl found: YES 3.1.1 00:02:19.007 Run-time dependency libpcap found: YES 1.10.4 00:02:19.008 Has header "pcap.h" with dependency libpcap: YES 00:02:19.008 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.008 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.008 Compiler for C supports arguments -Wformat: YES 00:02:19.008 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.008 Compiler for C supports arguments -Wformat-security: NO 00:02:19.008 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.008 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.008 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.008 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.008 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.008 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.008 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.008 Compiler for C supports arguments -Wundef: YES 00:02:19.008 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.008 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.008 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:19.008 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.008 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.008 Program objdump found: YES (/usr/bin/objdump) 00:02:19.008 Compiler for C supports arguments -mavx512f: YES 00:02:19.008 Checking if "AVX512 checking" compiles: YES 00:02:19.008 Fetching value of define "__SSE4_2__" : 1 00:02:19.008 Fetching value of define "__AES__" : 1 00:02:19.008 Fetching value of define "__AVX__" : 1 00:02:19.008 Fetching value of define "__AVX2__" : 1 00:02:19.008 Fetching value of define "__AVX512BW__" : (undefined) 00:02:19.008 Fetching value of define "__AVX512CD__" : (undefined) 00:02:19.008 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:19.008 Fetching value of define "__AVX512F__" : (undefined) 00:02:19.008 Fetching value of define "__AVX512VL__" : (undefined) 00:02:19.008 Fetching value of define "__PCLMUL__" : 1 00:02:19.008 Fetching value of define "__RDRND__" : 1 00:02:19.008 Fetching value of define "__RDSEED__" : 1 00:02:19.008 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:19.008 Fetching value of define "__znver1__" : (undefined) 00:02:19.008 Fetching value of define "__znver2__" : (undefined) 00:02:19.008 Fetching value of define "__znver3__" : (undefined) 00:02:19.008 Fetching value of define "__znver4__" : (undefined) 00:02:19.008 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.008 Message: lib/log: Defining dependency "log" 00:02:19.008 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.008 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.008 Checking for function "getentropy" : NO 00:02:19.008 Message: lib/eal: Defining dependency "eal" 00:02:19.008 Message: lib/ring: Defining dependency "ring" 00:02:19.008 Message: lib/rcu: Defining dependency "rcu" 00:02:19.008 Message: lib/mempool: Defining dependency "mempool" 00:02:19.008 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.008 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:19.008 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.008 Compiler for C supports arguments -mpclmul: YES 00:02:19.008 Compiler for C supports arguments -maes: YES 00:02:19.008 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.008 Compiler for C supports arguments -mavx512bw: YES 00:02:19.008 Compiler for C supports arguments -mavx512dq: YES 00:02:19.008 Compiler for C supports arguments -mavx512vl: YES 00:02:19.008 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:19.008 Compiler for C supports arguments -mavx2: YES 00:02:19.008 Compiler for C supports arguments -mavx: YES 00:02:19.008 Message: lib/net: Defining dependency "net" 00:02:19.008 Message: lib/meter: Defining dependency "meter" 00:02:19.008 Message: lib/ethdev: Defining dependency "ethdev" 00:02:19.008 Message: lib/pci: Defining dependency "pci" 00:02:19.008 Message: lib/cmdline: Defining dependency "cmdline" 00:02:19.008 Message: lib/metrics: Defining dependency "metrics" 00:02:19.008 Message: lib/hash: Defining dependency "hash" 00:02:19.008 Message: lib/timer: Defining dependency "timer" 00:02:19.008 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.008 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:19.008 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:19.008 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:19.008 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:19.008 Message: lib/acl: Defining dependency "acl" 00:02:19.008 Message: lib/bbdev: Defining dependency "bbdev" 00:02:19.008 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:19.008 Run-time dependency libelf found: YES 0.191 00:02:19.008 Message: lib/bpf: Defining dependency "bpf" 00:02:19.008 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:19.008 Message: lib/compressdev: Defining dependency "compressdev" 00:02:19.008 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:19.008 Message: lib/distributor: Defining dependency "distributor" 00:02:19.008 Message: lib/dmadev: Defining dependency "dmadev" 00:02:19.008 Message: lib/efd: Defining dependency "efd" 00:02:19.008 Message: lib/eventdev: Defining dependency "eventdev" 00:02:19.008 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:19.008 Message: lib/gpudev: Defining dependency "gpudev" 00:02:19.008 Message: lib/gro: Defining dependency "gro" 00:02:19.008 Message: lib/gso: Defining dependency "gso" 00:02:19.008 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:19.008 Message: lib/jobstats: Defining dependency "jobstats" 00:02:19.008 Message: lib/latencystats: Defining dependency "latencystats" 00:02:19.008 Message: lib/lpm: Defining dependency "lpm" 00:02:19.008 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.008 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:19.008 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:19.008 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:19.008 Message: lib/member: Defining dependency "member" 00:02:19.008 Message: lib/pcapng: Defining dependency "pcapng" 00:02:19.008 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:19.008 Message: lib/power: Defining dependency "power" 00:02:19.008 Message: lib/rawdev: Defining dependency "rawdev" 00:02:19.008 Message: lib/regexdev: Defining dependency "regexdev" 00:02:19.008 Message: lib/mldev: Defining dependency "mldev" 00:02:19.008 Message: lib/rib: Defining dependency "rib" 00:02:19.008 Message: lib/reorder: Defining dependency "reorder" 00:02:19.008 Message: lib/sched: Defining dependency "sched" 00:02:19.008 Message: lib/security: Defining dependency "security" 00:02:19.008 Message: lib/stack: Defining dependency "stack" 00:02:19.008 Has header "linux/userfaultfd.h" : YES 00:02:19.008 Has header "linux/vduse.h" : YES 00:02:19.008 Message: lib/vhost: Defining dependency "vhost" 00:02:19.008 Message: lib/ipsec: Defining dependency "ipsec" 00:02:19.008 Message: lib/pdcp: Defining dependency "pdcp" 00:02:19.008 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.008 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:19.008 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:19.008 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.008 Message: lib/fib: Defining dependency "fib" 00:02:19.008 Message: lib/port: Defining dependency "port" 00:02:19.008 Message: lib/pdump: Defining dependency "pdump" 00:02:19.008 Message: lib/table: Defining dependency "table" 00:02:19.008 Message: lib/pipeline: Defining dependency "pipeline" 00:02:19.008 Message: lib/graph: Defining dependency "graph" 00:02:19.008 Message: lib/node: Defining dependency "node" 00:02:19.008 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.944 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.944 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.944 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.944 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:19.944 Compiler for C supports arguments -Wno-unused-value: YES 00:02:19.944 Compiler for C supports arguments -Wno-format: YES 00:02:19.944 Compiler for C supports arguments -Wno-format-security: YES 00:02:19.944 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:19.944 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.944 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:19.944 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:19.944 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.944 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.944 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.944 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:19.944 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:19.944 Has header "sys/epoll.h" : YES 00:02:19.944 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:19.944 Configuring doxy-api-html.conf using configuration 00:02:19.944 Configuring doxy-api-man.conf using configuration 00:02:19.944 Program mandb found: YES (/usr/bin/mandb) 00:02:19.944 Program sphinx-build found: NO 00:02:19.944 Configuring rte_build_config.h using configuration 00:02:19.944 Message: 00:02:19.944 ================= 00:02:19.944 Applications Enabled 00:02:19.944 ================= 
00:02:19.944 00:02:19.944 apps: 00:02:19.944 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:19.944 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:19.944 test-pmd, test-regex, test-sad, test-security-perf, 00:02:19.944 00:02:19.944 Message: 00:02:19.944 ================= 00:02:19.944 Libraries Enabled 00:02:19.944 ================= 00:02:19.944 00:02:19.944 libs: 00:02:19.944 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:19.944 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:19.944 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:19.944 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:19.944 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:19.944 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:19.944 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:19.944 00:02:19.944 00:02:19.944 Message: 00:02:19.944 =============== 00:02:19.944 Drivers Enabled 00:02:19.944 =============== 00:02:19.944 00:02:19.944 common: 00:02:19.944 00:02:19.944 bus: 00:02:19.944 pci, vdev, 00:02:19.944 mempool: 00:02:19.944 ring, 00:02:19.944 dma: 00:02:19.944 00:02:19.944 net: 00:02:19.944 i40e, 00:02:19.944 raw: 00:02:19.944 00:02:19.944 crypto: 00:02:19.944 00:02:19.944 compress: 00:02:19.944 00:02:19.944 regex: 00:02:19.944 00:02:19.944 ml: 00:02:19.944 00:02:19.944 vdpa: 00:02:19.944 00:02:19.944 event: 00:02:19.944 00:02:19.944 baseband: 00:02:19.944 00:02:19.944 gpu: 00:02:19.944 00:02:19.944 00:02:19.944 Message: 00:02:19.944 ================= 00:02:19.944 Content Skipped 00:02:19.944 ================= 00:02:19.944 00:02:19.944 apps: 00:02:19.944 00:02:19.944 libs: 00:02:19.944 00:02:19.944 drivers: 00:02:19.944 common/cpt: not in enabled drivers build config 00:02:19.944 common/dpaax: not in enabled drivers build config 00:02:19.944 common/iavf: not in enabled drivers build config 00:02:19.944 common/idpf: not in enabled drivers build config 00:02:19.944 common/mvep: not in enabled drivers build config 00:02:19.944 common/octeontx: not in enabled drivers build config 00:02:19.944 bus/auxiliary: not in enabled drivers build config 00:02:19.944 bus/cdx: not in enabled drivers build config 00:02:19.944 bus/dpaa: not in enabled drivers build config 00:02:19.944 bus/fslmc: not in enabled drivers build config 00:02:19.944 bus/ifpga: not in enabled drivers build config 00:02:19.944 bus/platform: not in enabled drivers build config 00:02:19.944 bus/vmbus: not in enabled drivers build config 00:02:19.944 common/cnxk: not in enabled drivers build config 00:02:19.944 common/mlx5: not in enabled drivers build config 00:02:19.944 common/nfp: not in enabled drivers build config 00:02:19.944 common/qat: not in enabled drivers build config 00:02:19.944 common/sfc_efx: not in enabled drivers build config 00:02:19.944 mempool/bucket: not in enabled drivers build config 00:02:19.944 mempool/cnxk: not in enabled drivers build config 00:02:19.944 mempool/dpaa: not in enabled drivers build config 00:02:19.944 mempool/dpaa2: not in enabled drivers build config 00:02:19.944 mempool/octeontx: not in enabled drivers build config 00:02:19.944 mempool/stack: not in enabled drivers build config 00:02:19.944 dma/cnxk: not in enabled drivers build config 00:02:19.944 dma/dpaa: not in enabled drivers build config 00:02:19.944 dma/dpaa2: not in enabled drivers build config 00:02:19.944 
dma/hisilicon: not in enabled drivers build config 00:02:19.944 dma/idxd: not in enabled drivers build config 00:02:19.944 dma/ioat: not in enabled drivers build config 00:02:19.944 dma/skeleton: not in enabled drivers build config 00:02:19.944 net/af_packet: not in enabled drivers build config 00:02:19.944 net/af_xdp: not in enabled drivers build config 00:02:19.944 net/ark: not in enabled drivers build config 00:02:19.944 net/atlantic: not in enabled drivers build config 00:02:19.944 net/avp: not in enabled drivers build config 00:02:19.944 net/axgbe: not in enabled drivers build config 00:02:19.944 net/bnx2x: not in enabled drivers build config 00:02:19.944 net/bnxt: not in enabled drivers build config 00:02:19.944 net/bonding: not in enabled drivers build config 00:02:19.944 net/cnxk: not in enabled drivers build config 00:02:19.944 net/cpfl: not in enabled drivers build config 00:02:19.944 net/cxgbe: not in enabled drivers build config 00:02:19.944 net/dpaa: not in enabled drivers build config 00:02:19.944 net/dpaa2: not in enabled drivers build config 00:02:19.944 net/e1000: not in enabled drivers build config 00:02:19.944 net/ena: not in enabled drivers build config 00:02:19.944 net/enetc: not in enabled drivers build config 00:02:19.944 net/enetfec: not in enabled drivers build config 00:02:19.944 net/enic: not in enabled drivers build config 00:02:19.944 net/failsafe: not in enabled drivers build config 00:02:19.944 net/fm10k: not in enabled drivers build config 00:02:19.944 net/gve: not in enabled drivers build config 00:02:19.944 net/hinic: not in enabled drivers build config 00:02:19.944 net/hns3: not in enabled drivers build config 00:02:19.944 net/iavf: not in enabled drivers build config 00:02:19.944 net/ice: not in enabled drivers build config 00:02:19.944 net/idpf: not in enabled drivers build config 00:02:19.944 net/igc: not in enabled drivers build config 00:02:19.944 net/ionic: not in enabled drivers build config 00:02:19.944 net/ipn3ke: not in enabled drivers build config 00:02:19.944 net/ixgbe: not in enabled drivers build config 00:02:19.944 net/mana: not in enabled drivers build config 00:02:19.944 net/memif: not in enabled drivers build config 00:02:19.944 net/mlx4: not in enabled drivers build config 00:02:19.944 net/mlx5: not in enabled drivers build config 00:02:19.944 net/mvneta: not in enabled drivers build config 00:02:19.944 net/mvpp2: not in enabled drivers build config 00:02:19.944 net/netvsc: not in enabled drivers build config 00:02:19.944 net/nfb: not in enabled drivers build config 00:02:19.944 net/nfp: not in enabled drivers build config 00:02:19.944 net/ngbe: not in enabled drivers build config 00:02:19.944 net/null: not in enabled drivers build config 00:02:19.944 net/octeontx: not in enabled drivers build config 00:02:19.944 net/octeon_ep: not in enabled drivers build config 00:02:19.944 net/pcap: not in enabled drivers build config 00:02:19.944 net/pfe: not in enabled drivers build config 00:02:19.944 net/qede: not in enabled drivers build config 00:02:19.944 net/ring: not in enabled drivers build config 00:02:19.944 net/sfc: not in enabled drivers build config 00:02:19.944 net/softnic: not in enabled drivers build config 00:02:19.944 net/tap: not in enabled drivers build config 00:02:19.944 net/thunderx: not in enabled drivers build config 00:02:19.944 net/txgbe: not in enabled drivers build config 00:02:19.944 net/vdev_netvsc: not in enabled drivers build config 00:02:19.944 net/vhost: not in enabled drivers build config 00:02:19.944 net/virtio: 
not in enabled drivers build config 00:02:19.944 net/vmxnet3: not in enabled drivers build config 00:02:19.944 raw/cnxk_bphy: not in enabled drivers build config 00:02:19.944 raw/cnxk_gpio: not in enabled drivers build config 00:02:19.944 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:19.944 raw/ifpga: not in enabled drivers build config 00:02:19.944 raw/ntb: not in enabled drivers build config 00:02:19.944 raw/skeleton: not in enabled drivers build config 00:02:19.944 crypto/armv8: not in enabled drivers build config 00:02:19.944 crypto/bcmfs: not in enabled drivers build config 00:02:19.945 crypto/caam_jr: not in enabled drivers build config 00:02:19.945 crypto/ccp: not in enabled drivers build config 00:02:19.945 crypto/cnxk: not in enabled drivers build config 00:02:19.945 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.945 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.945 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.945 crypto/mlx5: not in enabled drivers build config 00:02:19.945 crypto/mvsam: not in enabled drivers build config 00:02:19.945 crypto/nitrox: not in enabled drivers build config 00:02:19.945 crypto/null: not in enabled drivers build config 00:02:19.945 crypto/octeontx: not in enabled drivers build config 00:02:19.945 crypto/openssl: not in enabled drivers build config 00:02:19.945 crypto/scheduler: not in enabled drivers build config 00:02:19.945 crypto/uadk: not in enabled drivers build config 00:02:19.945 crypto/virtio: not in enabled drivers build config 00:02:19.945 compress/isal: not in enabled drivers build config 00:02:19.945 compress/mlx5: not in enabled drivers build config 00:02:19.945 compress/octeontx: not in enabled drivers build config 00:02:19.945 compress/zlib: not in enabled drivers build config 00:02:19.945 regex/mlx5: not in enabled drivers build config 00:02:19.945 regex/cn9k: not in enabled drivers build config 00:02:19.945 ml/cnxk: not in enabled drivers build config 00:02:19.945 vdpa/ifc: not in enabled drivers build config 00:02:19.945 vdpa/mlx5: not in enabled drivers build config 00:02:19.945 vdpa/nfp: not in enabled drivers build config 00:02:19.945 vdpa/sfc: not in enabled drivers build config 00:02:19.945 event/cnxk: not in enabled drivers build config 00:02:19.945 event/dlb2: not in enabled drivers build config 00:02:19.945 event/dpaa: not in enabled drivers build config 00:02:19.945 event/dpaa2: not in enabled drivers build config 00:02:19.945 event/dsw: not in enabled drivers build config 00:02:19.945 event/opdl: not in enabled drivers build config 00:02:19.945 event/skeleton: not in enabled drivers build config 00:02:19.945 event/sw: not in enabled drivers build config 00:02:19.945 event/octeontx: not in enabled drivers build config 00:02:19.945 baseband/acc: not in enabled drivers build config 00:02:19.945 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:19.945 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:19.945 baseband/la12xx: not in enabled drivers build config 00:02:19.945 baseband/null: not in enabled drivers build config 00:02:19.945 baseband/turbo_sw: not in enabled drivers build config 00:02:19.945 gpu/cuda: not in enabled drivers build config 00:02:19.945 00:02:19.945 00:02:19.945 Build targets in project: 220 00:02:19.945 00:02:19.945 DPDK 23.11.0 00:02:19.945 00:02:19.945 User defined options 00:02:19.945 libdir : lib 00:02:19.945 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:19.945 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:19.945 c_link_args : 00:02:19.945 enable_docs : false 00:02:19.945 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.945 enable_kmods : false 00:02:19.945 machine : native 00:02:19.945 tests : false 00:02:19.945 00:02:19.945 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.945 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:19.945 03:58:21 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:19.945 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:19.945 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:19.945 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.945 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:20.203 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:20.203 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:20.203 [6/710] Linking static target lib/librte_kvargs.a 00:02:20.203 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:20.203 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:20.203 [9/710] Linking static target lib/librte_log.a 00:02:20.203 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:20.203 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.462 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:20.462 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:20.720 [14/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.720 [15/710] Linking target lib/librte_log.so.24.0 00:02:20.720 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:20.720 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:20.720 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:20.720 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:20.979 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:20.979 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:20.979 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:20.979 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:20.979 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:21.237 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:21.237 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:21.237 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:21.237 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.237 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.237 [30/710] Linking static target lib/librte_telemetry.a 00:02:21.237 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:21.496 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:21.496 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:21.755 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.755 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:21.755 [36/710] Linking target lib/librte_telemetry.so.24.0 00:02:21.755 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:21.755 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.755 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:21.755 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.755 [41/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:21.755 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.755 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.014 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.014 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.014 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.273 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:22.273 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:22.273 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:22.273 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:22.273 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:22.532 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:22.532 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:22.532 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.532 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:22.532 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:22.791 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:22.791 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:22.791 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:22.791 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:22.791 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:22.791 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.791 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.050 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.050 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.050 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.050 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.050 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.308 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.308 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.309 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.309 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:23.309 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.309 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.309 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.568 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:23.568 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.568 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.568 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:23.827 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.828 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.828 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.087 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.087 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.087 [85/710] Linking static target lib/librte_ring.a 00:02:24.087 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.087 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.346 [88/710] Linking static target lib/librte_eal.a 00:02:24.346 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.346 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.346 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.346 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.346 [93/710] Linking static target lib/librte_mempool.a 00:02:24.605 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.605 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.605 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.605 [97/710] Linking static target lib/librte_rcu.a 00:02:24.605 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.864 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.864 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.864 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.864 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.123 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.123 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.123 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.123 [106/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.123 [107/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.123 [108/710] Linking static target lib/librte_mbuf.a 00:02:25.382 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.382 [110/710] Linking static target lib/librte_net.a 00:02:25.382 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:25.382 [112/710] Linking static target lib/librte_meter.a 00:02:25.640 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.640 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.641 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.641 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.641 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.641 [118/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.641 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.208 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.208 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:26.466 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:26.725 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.725 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:26.725 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.725 [126/710] Linking static target lib/librte_pci.a 00:02:26.725 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.725 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:26.984 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.984 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.984 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.984 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.984 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.984 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.984 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.984 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.984 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.984 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.984 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.984 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.243 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.243 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.503 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.503 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.503 [145/710] Linking static target lib/librte_cmdline.a 00:02:27.503 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:27.762 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:27.762 [148/710] Linking static target lib/librte_metrics.a 00:02:27.762 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.762 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:28.020 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.279 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.279 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.279 [154/710] Linking static target 
lib/librte_timer.a 00:02:28.279 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.537 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.796 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:28.796 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:29.055 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:29.055 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:29.622 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:29.622 [162/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.622 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:29.622 [164/710] Linking static target lib/librte_ethdev.a 00:02:29.622 [165/710] Linking static target lib/librte_bitratestats.a 00:02:29.622 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:29.622 [167/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.622 [168/710] Linking static target lib/librte_hash.a 00:02:29.622 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:29.881 [170/710] Linking static target lib/librte_bbdev.a 00:02:29.881 [171/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.881 [172/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.881 [173/710] Linking target lib/librte_eal.so.24.0 00:02:29.881 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:30.140 [175/710] Linking target lib/librte_ring.so.24.0 00:02:30.140 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:30.140 [177/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:30.140 [178/710] Linking target lib/librte_pci.so.24.0 00:02:30.140 [179/710] Linking target lib/librte_meter.so.24.0 00:02:30.140 [180/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:30.140 [181/710] Linking target lib/librte_rcu.so.24.0 00:02:30.140 [182/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:30.140 [183/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.140 [184/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:30.398 [185/710] Linking static target lib/acl/libavx2_tmp.a 00:02:30.398 [186/710] Linking target lib/librte_timer.so.24.0 00:02:30.398 [187/710] Linking target lib/librte_mempool.so.24.0 00:02:30.398 [188/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:30.398 [189/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:30.398 [190/710] Linking static target lib/acl/libavx512_tmp.a 00:02:30.398 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.398 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:30.398 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:30.398 [194/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:30.399 [195/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:30.399 [196/710] Linking target lib/librte_mbuf.so.24.0 00:02:30.663 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:30.663 [198/710] Linking target lib/librte_net.so.24.0 00:02:30.663 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:30.663 [200/710] Linking static target lib/librte_acl.a 00:02:30.663 [201/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:30.663 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:30.663 [203/710] Linking target lib/librte_bbdev.so.24.0 00:02:30.663 [204/710] Linking target lib/librte_cmdline.so.24.0 00:02:30.663 [205/710] Linking target lib/librte_hash.so.24.0 00:02:30.663 [206/710] Linking static target lib/librte_cfgfile.a 00:02:30.964 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:30.964 [208/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:30.964 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.964 [210/710] Linking target lib/librte_acl.so.24.0 00:02:30.964 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:31.237 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:31.237 [213/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.237 [214/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:31.237 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:02:31.237 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:31.496 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:31.496 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:31.496 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:31.496 [220/710] Linking static target lib/librte_bpf.a 00:02:31.756 [221/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:31.756 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:31.756 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:31.756 [224/710] Linking static target lib/librte_compressdev.a 00:02:31.756 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.015 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.015 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:32.015 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:32.274 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:32.274 [230/710] Linking static target lib/librte_distributor.a 00:02:32.274 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:32.274 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.274 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:32.533 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.533 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:32.533 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:32.533 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.533 [238/710] Linking static 
target lib/librte_dmadev.a 00:02:32.792 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.792 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:32.792 [241/710] Linking target lib/librte_dmadev.so.24.0 00:02:33.051 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:33.051 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:33.310 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:33.310 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:33.310 [246/710] Linking static target lib/librte_efd.a 00:02:33.310 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.569 [248/710] Linking static target lib/librte_cryptodev.a 00:02:33.569 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:33.569 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.569 [251/710] Linking target lib/librte_efd.so.24.0 00:02:33.829 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:33.829 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:33.829 [254/710] Linking static target lib/librte_dispatcher.a 00:02:33.829 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.091 [256/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:34.091 [257/710] Linking static target lib/librte_gpudev.a 00:02:34.091 [258/710] Linking target lib/librte_ethdev.so.24.0 00:02:34.091 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:34.091 [260/710] Linking target lib/librte_metrics.so.24.0 00:02:34.091 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:34.352 [262/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.352 [263/710] Linking target lib/librte_bpf.so.24.0 00:02:34.352 [264/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:34.352 [265/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:34.352 [266/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:34.352 [267/710] Linking target lib/librte_bitratestats.so.24.0 00:02:34.352 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:34.611 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:34.611 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.611 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:34.611 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:34.871 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:34.871 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.871 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:34.871 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:34.871 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:35.130 [278/710] Linking static target lib/librte_eventdev.a 00:02:35.130 
[279/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:35.130 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:35.130 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:35.130 [282/710] Linking static target lib/librte_gro.a 00:02:35.130 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:35.130 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:35.389 [285/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.389 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:35.389 [287/710] Linking target lib/librte_gro.so.24.0 00:02:35.389 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:35.389 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:35.389 [290/710] Linking static target lib/librte_gso.a 00:02:35.648 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.648 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:35.648 [293/710] Linking target lib/librte_gso.so.24.0 00:02:35.648 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:35.907 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:35.907 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:35.907 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:35.907 [298/710] Linking static target lib/librte_jobstats.a 00:02:35.907 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:36.166 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:36.166 [301/710] Linking static target lib/librte_latencystats.a 00:02:36.166 [302/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:36.166 [303/710] Linking static target lib/librte_ip_frag.a 00:02:36.166 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.166 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:36.166 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.425 [307/710] Linking target lib/librte_latencystats.so.24.0 00:02:36.425 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.425 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:36.425 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:36.425 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:36.425 [312/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:36.425 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:36.425 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.425 [315/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:36.684 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.684 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:36.943 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.943 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:36.943 [320/710] 
Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:36.943 [321/710] Linking static target lib/librte_lpm.a 00:02:36.943 [322/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:36.943 [323/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:36.943 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:37.203 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:37.203 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:37.203 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.203 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:37.203 [329/710] Linking static target lib/librte_pcapng.a 00:02:37.203 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.203 [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:37.203 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:37.203 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:37.462 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:37.462 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.462 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:37.462 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:37.719 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:37.719 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:37.719 [340/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:37.978 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:37.978 [342/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:37.978 [343/710] Linking static target lib/librte_power.a 00:02:37.978 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:37.978 [345/710] Linking static target lib/librte_rawdev.a 00:02:37.978 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:37.978 [347/710] Linking static target lib/librte_regexdev.a 00:02:37.978 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:37.978 [349/710] Linking static target lib/librte_member.a 00:02:37.978 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:38.237 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:38.237 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:38.237 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.237 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:38.237 [355/710] Linking static target lib/librte_mldev.a 00:02:38.237 [356/710] Linking target lib/librte_member.so.24.0 00:02:38.496 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.496 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:38.496 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:38.496 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.496 [361/710] Linking target 
lib/librte_power.so.24.0 00:02:38.496 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:38.756 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.756 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:38.756 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:38.756 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.015 [367/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.015 [368/710] Linking static target lib/librte_reorder.a 00:02:39.015 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:39.015 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:39.015 [371/710] Linking static target lib/librte_rib.a 00:02:39.015 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:39.015 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:39.273 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.273 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:39.274 [376/710] Linking static target lib/librte_stack.a 00:02:39.274 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:39.274 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.274 [379/710] Linking static target lib/librte_security.a 00:02:39.274 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:39.274 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.532 [382/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.533 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.533 [384/710] Linking target lib/librte_rib.so.24.0 00:02:39.533 [385/710] Linking target lib/librte_stack.so.24.0 00:02:39.533 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:39.533 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:39.533 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.533 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.533 [390/710] Linking target lib/librte_security.so.24.0 00:02:39.792 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.792 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:39.792 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.051 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:40.051 [395/710] Linking static target lib/librte_sched.a 00:02:40.310 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.310 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.310 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:40.310 [399/710] Linking target lib/librte_sched.so.24.0 00:02:40.570 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:40.570 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.570 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:40.829 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
00:02:40.829 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.088 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:41.088 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:41.088 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:41.348 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:41.348 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:41.348 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:41.606 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:41.606 [412/710] Linking static target lib/librte_ipsec.a 00:02:41.606 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:41.864 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.864 [415/710] Linking target lib/librte_ipsec.so.24.0 00:02:41.864 [416/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:41.864 [417/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:41.864 [418/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:41.864 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:41.864 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:41.864 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:41.864 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:42.122 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:42.691 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:42.691 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:42.691 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:42.691 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:42.691 [428/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:42.950 [429/710] Linking static target lib/librte_pdcp.a 00:02:42.950 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:42.950 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:42.950 [432/710] Linking static target lib/librte_fib.a 00:02:43.209 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.209 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.209 [435/710] Linking target lib/librte_pdcp.so.24.0 00:02:43.209 [436/710] Linking target lib/librte_fib.so.24.0 00:02:43.209 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:43.777 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:43.777 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:43.777 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:43.777 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:43.777 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:44.037 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:44.037 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:44.037 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:44.296 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:44.296 [447/710] Linking static target lib/librte_port.a 00:02:44.296 [448/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.555 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:44.555 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:44.555 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:44.555 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:44.555 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:44.814 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.814 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:44.814 [456/710] Linking target lib/librte_port.so.24.0 00:02:44.814 [457/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:44.814 [458/710] Linking static target lib/librte_pdump.a 00:02:44.814 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:45.073 [460/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:45.073 [461/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.073 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:45.332 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:45.591 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:45.591 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:45.591 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:45.591 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:45.591 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:45.850 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:45.850 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:45.850 [471/710] Linking static target lib/librte_table.a 00:02:46.109 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:46.109 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:46.368 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.368 [475/710] Linking target lib/librte_table.so.24.0 00:02:46.627 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:46.627 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:46.627 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:46.886 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:46.886 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:47.145 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:47.145 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:47.145 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:47.405 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:47.405 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:47.405 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:02:47.664 [487/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:47.664 [488/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:47.922 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:47.922 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:47.922 [491/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:47.922 [492/710] Linking static target lib/librte_graph.a 00:02:48.181 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:48.440 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:48.698 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.698 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:48.698 [497/710] Linking target lib/librte_graph.so.24.0 00:02:48.698 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:48.698 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:48.957 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:48.957 [501/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:49.216 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:49.216 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:49.216 [504/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:49.216 [505/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:49.216 [506/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.475 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:49.734 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:49.734 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.734 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.734 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.734 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.734 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.993 [514/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:49.993 [515/710] Linking static target lib/librte_node.a 00:02:50.252 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.252 [517/710] Linking target lib/librte_node.so.24.0 00:02:50.252 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.252 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.252 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.252 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:50.511 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.511 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.511 [524/710] Linking static target drivers/librte_bus_pci.a 00:02:50.511 [525/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.511 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.511 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.511 [528/710] Linking static target drivers/librte_bus_vdev.a 00:02:50.770 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:50.770 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.770 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:50.770 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:51.029 [533/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.029 [534/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.029 [535/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.029 [536/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:51.029 [537/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.029 [538/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:51.029 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:51.029 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.029 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.029 [542/710] Linking static target drivers/librte_mempool_ring.a 00:02:51.029 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.029 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:51.287 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:51.287 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:51.546 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:51.804 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:51.805 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:52.064 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:52.064 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:52.632 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:52.891 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:52.891 [554/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:52.891 [555/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:52.891 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:52.891 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:53.149 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:53.408 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:53.408 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:53.667 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:53.667 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:54.236 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:54.236 [564/710] 
Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:54.236 [565/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:54.236 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:54.804 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:54.805 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:54.805 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:54.805 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:54.805 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:54.805 [572/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:54.805 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:55.386 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.386 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:55.386 [576/710] Linking static target lib/librte_vhost.a 00:02:55.386 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:55.386 [578/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:55.386 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:55.386 [580/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:55.386 [581/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:55.696 [582/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:55.696 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:55.696 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:55.696 [585/710] Linking static target drivers/librte_net_i40e.a 00:02:55.957 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:55.957 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:55.957 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:55.957 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:55.957 [590/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:56.216 [591/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:56.216 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:56.476 [593/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.476 [594/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.476 [595/710] Linking target lib/librte_vhost.so.24.0 00:02:56.476 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:56.476 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:56.735 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:56.735 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:56.995 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:56.995 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:57.254 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:57.254 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:57.254 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:57.254 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:57.513 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:57.513 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:58.081 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:58.081 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:58.081 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:58.081 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:58.081 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:58.081 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:58.081 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:58.340 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:58.340 [616/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:58.340 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:58.599 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:58.858 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:58.858 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:58.858 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:59.117 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:59.117 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:59.685 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:59.945 [625/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:59.945 [626/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:59.945 [627/710] Linking static target lib/librte_pipeline.a 00:02:59.945 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:59.945 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:00.203 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:00.203 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:00.203 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:00.463 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:00.463 [634/710] Linking target app/dpdk-dumpcap 00:03:00.463 [635/710] Linking target app/dpdk-graph 00:03:00.463 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:00.722 [637/710] Linking target app/dpdk-pdump 00:03:00.722 [638/710] Linking target app/dpdk-proc-info 00:03:00.722 [639/710] Linking target app/dpdk-test-acl 00:03:00.722 [640/710] Linking target app/dpdk-test-cmdline 00:03:00.981 [641/710] Linking target app/dpdk-test-compress-perf 00:03:00.981 [642/710] Linking target 
app/dpdk-test-crypto-perf 00:03:00.981 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:00.981 [644/710] Linking target app/dpdk-test-dma-perf 00:03:00.981 [645/710] Linking target app/dpdk-test-fib 00:03:00.981 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:01.239 [647/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:01.239 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:01.239 [649/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:01.497 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:01.497 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:01.497 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:01.754 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:01.754 [654/710] Linking target app/dpdk-test-eventdev 00:03:01.754 [655/710] Linking target app/dpdk-test-gpudev 00:03:02.059 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:02.059 [657/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:02.059 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:02.059 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:02.059 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:02.059 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:02.318 [662/710] Linking target app/dpdk-test-flow-perf 00:03:02.318 [663/710] Linking target app/dpdk-test-bbdev 00:03:02.318 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:02.318 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:02.576 [666/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.576 [667/710] Linking target lib/librte_pipeline.so.24.0 00:03:02.576 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:02.834 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:02.834 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:02.834 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:02.834 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:03.092 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:03.092 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:03.350 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:03.350 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:03.350 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:03.610 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:03.610 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:03.610 [680/710] Linking target app/dpdk-test-pipeline 00:03:03.869 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:03.869 [682/710] Linking target app/dpdk-test-mldev 00:03:03.869 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:04.436 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:04.436 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:04.436 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:04.436 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:04.696 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:04.696 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:04.955 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:04.955 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:04.955 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:05.215 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:05.474 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:05.734 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:05.734 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:05.994 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:05.994 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:05.994 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:06.253 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:06.253 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:06.253 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:06.253 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:06.512 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:06.512 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:06.512 [706/710] Linking target app/dpdk-test-regex 00:03:06.512 [707/710] Linking target app/dpdk-test-sad 00:03:07.080 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:07.080 [709/710] Linking target app/dpdk-testpmd 00:03:07.338 [710/710] Linking target app/dpdk-test-security-perf 00:03:07.338 03:59:08 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:07.338 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:07.338 [0/1] Installing files. 
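For reference, the install step invoked above is part of an ordinary meson/ninja workflow. A minimal sketch of reproducing it by hand, assuming the DPDK checkout at /home/vagrant/spdk_repo/dpdk, an already meson-configured build-tmp directory, and a prefix pointing at the repo's build/ directory as this job uses (the exact configure options come from the autobuild script and are not shown in this log):

    # configure step (assumed; performed earlier by the autobuild script)
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
    # compile libraries, drivers, and the test applications listed above
    ninja -C build-tmp -j10
    # install headers, libraries, and the examples/ tree shown below
    ninja -C build-tmp install

With that prefix, the example sources land under build/share/dpdk/examples, which matches the Installing lines that follow.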
00:03:07.598 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:07.599 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:07.599 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:07.600 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:07.859 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:07.860 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:07.860 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:07.860 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:07.860 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.860 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:07.861 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:07.861 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:08.123 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:08.123 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:08.123 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.123 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:08.123 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.123 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.124 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.125 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:08.126 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:08.126 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:08.126 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:08.126 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:08.126 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:08.126 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:08.126 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:08.126 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:08.126 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:08.126 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:08.126 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:08.126 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:08.126 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:08.126 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:08.126 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:08.126 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:08.126 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:08.126 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:08.126 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:08.126 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:08.126 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:08.126 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:08.126 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:08.126 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:08.126 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:08.126 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:08.126 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:08.126 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:08.126 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:08.126 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:08.126 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:08.126 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:08.126 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:08.126 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:08.126 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:08.126 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:08.126 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:08.126 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:08.126 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:08.126 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:08.126 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:08.126 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:08.126 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:08.126 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:08.126 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:08.126 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:08.126 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:08.126 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:08.126 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:08.126 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:08.126 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:08.126 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:08.126 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:08.126 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:08.126 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:08.126 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:08.126 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:08.126 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:08.126 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:08.126 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:08.126 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:08.126 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:08.126 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:08.126 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:08.126 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:08.126 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:08.126 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:08.126 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:08.126 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:08.126 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:08.127 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:08.127 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:08.127 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:08.127 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:08.127 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:08.127 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:08.127 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:08.127 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:08.127 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:08.127 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:08.127 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:08.127 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:08.127 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:08.127 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:08.127 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:08.127 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:08.127 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:08.127 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:08.127 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:08.127 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:08.127 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:08.127 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:08.127 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:08.127 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:08.127 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:08.127 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:08.127 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:08.127 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:08.127 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:08.127 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:08.127 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:08.127 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:08.127 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:08.387 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:08.387 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:08.387 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:08.387 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:08.387 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:08.387 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:08.387 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:08.387 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:08.387 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:08.387 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:08.387 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:08.387 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:08.387 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:08.387 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:08.387 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:08.387 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:08.387 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:08.387 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:08.387 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:08.387 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:08.387 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:08.387 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:08.387 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:08.387 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:08.387 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:08.387 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:08.387 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:08.387 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:08.387 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:08.387 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:08.387 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:08.387 03:59:09 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:08.387 03:59:09 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:08.387 03:59:09 -- common/autobuild_common.sh@203 -- $ cat 00:03:08.387 03:59:09 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:08.387 00:03:08.387 real 0m55.939s 00:03:08.387 user 6m35.673s 00:03:08.387 sys 1m7.607s 00:03:08.387 ************************************ 00:03:08.387 END TEST build_native_dpdk 00:03:08.387 ************************************ 00:03:08.387 03:59:09 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:08.387 03:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.387 03:59:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:08.387 03:59:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:08.387 03:59:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:08.387 03:59:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:08.387 03:59:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:08.387 03:59:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:08.387 03:59:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:08.387 03:59:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:08.387 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:08.646 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.646 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:08.646 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:08.905 Using 'verbs' RDMA provider 00:03:24.355 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:36.563 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:36.563 go version go1.21.1 linux/amd64 00:03:37.131 Creating mk/config.mk...done. 00:03:37.131 Creating mk/cc.flags.mk...done. 00:03:37.131 Type 'make' to build. 00:03:37.131 03:59:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:37.131 03:59:38 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:37.131 03:59:38 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:37.131 03:59:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:37.131 ************************************ 00:03:37.131 START TEST make 00:03:37.131 ************************************ 00:03:37.131 03:59:38 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:37.390 make[1]: Nothing to be done for 'all'. 
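The configure invocation recorded just above links SPDK against the DPDK tree that this log installed into /home/vagrant/spdk_repo/dpdk/build. For anyone replaying that step outside the CI wrapper, a minimal sketch follows; it keeps only the DPDK-related flags from the full command shown in the log, and the paths simply mirror this job's workspace layout rather than anything SPDK requires.

    # Sketch only: build SPDK against the DPDK installed under dpdk/build (paths as in this job).
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
    # The log drives the build through "run_test make make -j10"; the underlying command is:
    make -j10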
00:03:59.351 CC lib/ut/ut.o 00:03:59.351 CC lib/log/log.o 00:03:59.351 CC lib/log/log_deprecated.o 00:03:59.351 CC lib/log/log_flags.o 00:03:59.351 CC lib/ut_mock/mock.o 00:03:59.351 LIB libspdk_ut_mock.a 00:03:59.351 LIB libspdk_ut.a 00:03:59.351 SO libspdk_ut_mock.so.5.0 00:03:59.351 LIB libspdk_log.a 00:03:59.351 SO libspdk_ut.so.1.0 00:03:59.351 SO libspdk_log.so.6.1 00:03:59.351 SYMLINK libspdk_ut_mock.so 00:03:59.351 SYMLINK libspdk_ut.so 00:03:59.351 SYMLINK libspdk_log.so 00:03:59.351 CC lib/util/bit_array.o 00:03:59.351 CC lib/util/base64.o 00:03:59.351 CC lib/util/cpuset.o 00:03:59.351 CC lib/util/crc16.o 00:03:59.351 CC lib/util/crc32.o 00:03:59.351 CC lib/util/crc32c.o 00:03:59.351 CC lib/ioat/ioat.o 00:03:59.351 CC lib/dma/dma.o 00:03:59.351 CXX lib/trace_parser/trace.o 00:03:59.351 CC lib/vfio_user/host/vfio_user_pci.o 00:03:59.351 CC lib/util/crc32_ieee.o 00:03:59.351 CC lib/util/crc64.o 00:03:59.351 CC lib/util/dif.o 00:03:59.351 CC lib/util/fd.o 00:03:59.351 LIB libspdk_dma.a 00:03:59.351 LIB libspdk_ioat.a 00:03:59.351 CC lib/vfio_user/host/vfio_user.o 00:03:59.351 CC lib/util/file.o 00:03:59.351 SO libspdk_dma.so.3.0 00:03:59.351 CC lib/util/hexlify.o 00:03:59.351 SO libspdk_ioat.so.6.0 00:03:59.351 CC lib/util/iov.o 00:03:59.351 SYMLINK libspdk_ioat.so 00:03:59.351 SYMLINK libspdk_dma.so 00:03:59.351 CC lib/util/math.o 00:03:59.351 CC lib/util/pipe.o 00:03:59.351 CC lib/util/strerror_tls.o 00:03:59.351 CC lib/util/string.o 00:03:59.351 CC lib/util/uuid.o 00:03:59.351 CC lib/util/fd_group.o 00:03:59.351 LIB libspdk_vfio_user.a 00:03:59.351 SO libspdk_vfio_user.so.4.0 00:03:59.351 CC lib/util/xor.o 00:03:59.351 CC lib/util/zipf.o 00:03:59.351 SYMLINK libspdk_vfio_user.so 00:03:59.610 LIB libspdk_util.a 00:03:59.610 SO libspdk_util.so.8.0 00:03:59.868 LIB libspdk_trace_parser.a 00:03:59.868 SYMLINK libspdk_util.so 00:03:59.868 SO libspdk_trace_parser.so.4.0 00:03:59.868 CC lib/idxd/idxd.o 00:03:59.868 CC lib/idxd/idxd_user.o 00:03:59.868 CC lib/idxd/idxd_kernel.o 00:03:59.868 CC lib/json/json_parse.o 00:03:59.868 CC lib/rdma/rdma_verbs.o 00:03:59.868 CC lib/rdma/common.o 00:03:59.868 SYMLINK libspdk_trace_parser.so 00:03:59.868 CC lib/conf/conf.o 00:03:59.868 CC lib/vmd/vmd.o 00:03:59.868 CC lib/env_dpdk/env.o 00:03:59.868 CC lib/vmd/led.o 00:04:00.127 CC lib/env_dpdk/memory.o 00:04:00.127 CC lib/env_dpdk/pci.o 00:04:00.127 CC lib/env_dpdk/init.o 00:04:00.127 LIB libspdk_conf.a 00:04:00.127 CC lib/env_dpdk/threads.o 00:04:00.127 CC lib/json/json_util.o 00:04:00.127 SO libspdk_conf.so.5.0 00:04:00.127 LIB libspdk_rdma.a 00:04:00.127 SYMLINK libspdk_conf.so 00:04:00.127 SO libspdk_rdma.so.5.0 00:04:00.387 CC lib/env_dpdk/pci_ioat.o 00:04:00.387 SYMLINK libspdk_rdma.so 00:04:00.387 CC lib/env_dpdk/pci_virtio.o 00:04:00.387 CC lib/env_dpdk/pci_vmd.o 00:04:00.387 CC lib/env_dpdk/pci_idxd.o 00:04:00.387 CC lib/json/json_write.o 00:04:00.387 CC lib/env_dpdk/pci_event.o 00:04:00.387 CC lib/env_dpdk/sigbus_handler.o 00:04:00.387 CC lib/env_dpdk/pci_dpdk.o 00:04:00.387 LIB libspdk_idxd.a 00:04:00.387 SO libspdk_idxd.so.11.0 00:04:00.387 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.646 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.646 SYMLINK libspdk_idxd.so 00:04:00.646 LIB libspdk_vmd.a 00:04:00.646 SO libspdk_vmd.so.5.0 00:04:00.646 SYMLINK libspdk_vmd.so 00:04:00.646 LIB libspdk_json.a 00:04:00.646 SO libspdk_json.so.5.1 00:04:00.905 SYMLINK libspdk_json.so 00:04:00.905 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.905 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.905 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:00.905 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:01.164 LIB libspdk_env_dpdk.a 00:04:01.164 LIB libspdk_jsonrpc.a 00:04:01.164 SO libspdk_jsonrpc.so.5.1 00:04:01.423 SO libspdk_env_dpdk.so.13.0 00:04:01.423 SYMLINK libspdk_jsonrpc.so 00:04:01.423 CC lib/rpc/rpc.o 00:04:01.423 SYMLINK libspdk_env_dpdk.so 00:04:01.682 LIB libspdk_rpc.a 00:04:01.682 SO libspdk_rpc.so.5.0 00:04:01.682 SYMLINK libspdk_rpc.so 00:04:01.941 CC lib/notify/notify.o 00:04:01.941 CC lib/notify/notify_rpc.o 00:04:01.941 CC lib/sock/sock_rpc.o 00:04:01.941 CC lib/sock/sock.o 00:04:01.941 CC lib/trace/trace_flags.o 00:04:01.941 CC lib/trace/trace.o 00:04:01.941 CC lib/trace/trace_rpc.o 00:04:01.941 LIB libspdk_notify.a 00:04:01.941 SO libspdk_notify.so.5.0 00:04:02.199 LIB libspdk_trace.a 00:04:02.199 SYMLINK libspdk_notify.so 00:04:02.199 SO libspdk_trace.so.9.0 00:04:02.199 LIB libspdk_sock.a 00:04:02.199 SYMLINK libspdk_trace.so 00:04:02.199 SO libspdk_sock.so.8.0 00:04:02.199 SYMLINK libspdk_sock.so 00:04:02.458 CC lib/thread/iobuf.o 00:04:02.458 CC lib/thread/thread.o 00:04:02.458 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:02.458 CC lib/nvme/nvme_ctrlr.o 00:04:02.458 CC lib/nvme/nvme_fabric.o 00:04:02.458 CC lib/nvme/nvme_ns_cmd.o 00:04:02.458 CC lib/nvme/nvme_ns.o 00:04:02.458 CC lib/nvme/nvme_pcie_common.o 00:04:02.458 CC lib/nvme/nvme_pcie.o 00:04:02.458 CC lib/nvme/nvme_qpair.o 00:04:02.717 CC lib/nvme/nvme.o 00:04:03.286 CC lib/nvme/nvme_quirks.o 00:04:03.286 CC lib/nvme/nvme_transport.o 00:04:03.286 CC lib/nvme/nvme_discovery.o 00:04:03.286 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:03.286 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:03.286 CC lib/nvme/nvme_tcp.o 00:04:03.544 CC lib/nvme/nvme_opal.o 00:04:03.544 CC lib/nvme/nvme_io_msg.o 00:04:03.803 LIB libspdk_thread.a 00:04:03.803 SO libspdk_thread.so.9.0 00:04:03.803 CC lib/nvme/nvme_poll_group.o 00:04:03.803 SYMLINK libspdk_thread.so 00:04:03.803 CC lib/nvme/nvme_zns.o 00:04:03.803 CC lib/accel/accel.o 00:04:03.803 CC lib/accel/accel_rpc.o 00:04:03.803 CC lib/nvme/nvme_cuse.o 00:04:04.062 CC lib/accel/accel_sw.o 00:04:04.062 CC lib/blob/blobstore.o 00:04:04.062 CC lib/init/json_config.o 00:04:04.062 CC lib/blob/request.o 00:04:04.430 CC lib/nvme/nvme_vfio_user.o 00:04:04.430 CC lib/nvme/nvme_rdma.o 00:04:04.430 CC lib/init/subsystem.o 00:04:04.430 CC lib/blob/zeroes.o 00:04:04.430 CC lib/blob/blob_bs_dev.o 00:04:04.430 CC lib/virtio/virtio.o 00:04:04.430 CC lib/init/subsystem_rpc.o 00:04:04.689 CC lib/init/rpc.o 00:04:04.689 CC lib/virtio/virtio_vhost_user.o 00:04:04.689 CC lib/virtio/virtio_vfio_user.o 00:04:04.689 CC lib/virtio/virtio_pci.o 00:04:04.689 LIB libspdk_init.a 00:04:04.689 SO libspdk_init.so.4.0 00:04:04.689 LIB libspdk_accel.a 00:04:04.948 SYMLINK libspdk_init.so 00:04:04.948 SO libspdk_accel.so.14.0 00:04:04.948 SYMLINK libspdk_accel.so 00:04:04.948 CC lib/event/app.o 00:04:04.948 CC lib/event/reactor.o 00:04:04.948 CC lib/event/app_rpc.o 00:04:04.948 CC lib/event/log_rpc.o 00:04:04.948 CC lib/event/scheduler_static.o 00:04:04.948 LIB libspdk_virtio.a 00:04:04.948 CC lib/bdev/bdev.o 00:04:04.948 CC lib/bdev/bdev_rpc.o 00:04:04.948 SO libspdk_virtio.so.6.0 00:04:05.206 CC lib/bdev/bdev_zone.o 00:04:05.206 CC lib/bdev/part.o 00:04:05.206 SYMLINK libspdk_virtio.so 00:04:05.206 CC lib/bdev/scsi_nvme.o 00:04:05.465 LIB libspdk_event.a 00:04:05.465 SO libspdk_event.so.12.0 00:04:05.465 SYMLINK libspdk_event.so 00:04:05.465 LIB libspdk_nvme.a 00:04:05.724 SO libspdk_nvme.so.12.0 00:04:05.983 SYMLINK libspdk_nvme.so 00:04:06.551 
LIB libspdk_blob.a 00:04:06.551 SO libspdk_blob.so.10.1 00:04:06.551 SYMLINK libspdk_blob.so 00:04:06.810 CC lib/blobfs/blobfs.o 00:04:06.810 CC lib/blobfs/tree.o 00:04:06.810 CC lib/lvol/lvol.o 00:04:07.378 LIB libspdk_bdev.a 00:04:07.378 SO libspdk_bdev.so.14.0 00:04:07.378 LIB libspdk_lvol.a 00:04:07.639 SYMLINK libspdk_bdev.so 00:04:07.639 SO libspdk_lvol.so.9.1 00:04:07.639 SYMLINK libspdk_lvol.so 00:04:07.639 LIB libspdk_blobfs.a 00:04:07.639 CC lib/ublk/ublk.o 00:04:07.639 CC lib/ublk/ublk_rpc.o 00:04:07.639 CC lib/nbd/nbd.o 00:04:07.639 CC lib/nbd/nbd_rpc.o 00:04:07.639 CC lib/ftl/ftl_core.o 00:04:07.639 CC lib/ftl/ftl_init.o 00:04:07.639 CC lib/ftl/ftl_layout.o 00:04:07.639 CC lib/scsi/dev.o 00:04:07.639 CC lib/nvmf/ctrlr.o 00:04:07.639 SO libspdk_blobfs.so.9.0 00:04:07.639 SYMLINK libspdk_blobfs.so 00:04:07.639 CC lib/scsi/lun.o 00:04:07.900 CC lib/scsi/port.o 00:04:07.900 CC lib/scsi/scsi.o 00:04:07.900 CC lib/ftl/ftl_debug.o 00:04:07.900 CC lib/scsi/scsi_bdev.o 00:04:07.900 CC lib/nvmf/ctrlr_discovery.o 00:04:07.900 CC lib/nvmf/ctrlr_bdev.o 00:04:07.900 CC lib/nvmf/subsystem.o 00:04:07.900 CC lib/nvmf/nvmf.o 00:04:07.900 CC lib/nvmf/nvmf_rpc.o 00:04:08.158 LIB libspdk_nbd.a 00:04:08.158 SO libspdk_nbd.so.6.0 00:04:08.158 CC lib/ftl/ftl_io.o 00:04:08.158 SYMLINK libspdk_nbd.so 00:04:08.158 CC lib/ftl/ftl_sb.o 00:04:08.158 LIB libspdk_ublk.a 00:04:08.417 SO libspdk_ublk.so.2.0 00:04:08.417 CC lib/scsi/scsi_pr.o 00:04:08.417 CC lib/scsi/scsi_rpc.o 00:04:08.417 SYMLINK libspdk_ublk.so 00:04:08.417 CC lib/scsi/task.o 00:04:08.417 CC lib/ftl/ftl_l2p.o 00:04:08.417 CC lib/ftl/ftl_l2p_flat.o 00:04:08.417 CC lib/nvmf/transport.o 00:04:08.676 CC lib/nvmf/tcp.o 00:04:08.676 CC lib/nvmf/rdma.o 00:04:08.676 CC lib/ftl/ftl_nv_cache.o 00:04:08.676 CC lib/ftl/ftl_band.o 00:04:08.676 LIB libspdk_scsi.a 00:04:08.676 SO libspdk_scsi.so.8.0 00:04:08.676 CC lib/ftl/ftl_band_ops.o 00:04:08.676 SYMLINK libspdk_scsi.so 00:04:08.676 CC lib/ftl/ftl_writer.o 00:04:08.935 CC lib/ftl/ftl_rq.o 00:04:08.935 CC lib/ftl/ftl_reloc.o 00:04:08.935 CC lib/ftl/ftl_l2p_cache.o 00:04:08.935 CC lib/ftl/ftl_p2l.o 00:04:08.935 CC lib/ftl/mngt/ftl_mngt.o 00:04:08.935 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:08.935 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:09.194 CC lib/iscsi/conn.o 00:04:09.194 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:09.194 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:09.194 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:09.194 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:09.454 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:09.454 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:09.454 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:09.454 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:09.454 CC lib/iscsi/init_grp.o 00:04:09.454 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:09.454 CC lib/iscsi/iscsi.o 00:04:09.454 CC lib/vhost/vhost.o 00:04:09.713 CC lib/iscsi/md5.o 00:04:09.713 CC lib/iscsi/param.o 00:04:09.713 CC lib/iscsi/portal_grp.o 00:04:09.713 CC lib/vhost/vhost_rpc.o 00:04:09.713 CC lib/iscsi/tgt_node.o 00:04:09.713 CC lib/iscsi/iscsi_subsystem.o 00:04:09.970 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:09.970 CC lib/ftl/utils/ftl_conf.o 00:04:09.970 CC lib/ftl/utils/ftl_md.o 00:04:09.970 CC lib/iscsi/iscsi_rpc.o 00:04:09.970 CC lib/iscsi/task.o 00:04:09.970 CC lib/ftl/utils/ftl_mempool.o 00:04:10.227 CC lib/ftl/utils/ftl_bitmap.o 00:04:10.227 CC lib/ftl/utils/ftl_property.o 00:04:10.227 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:10.227 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:10.227 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:10.227 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:10.227 CC lib/vhost/vhost_scsi.o 00:04:10.227 CC lib/vhost/vhost_blk.o 00:04:10.486 CC lib/vhost/rte_vhost_user.o 00:04:10.486 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:10.486 LIB libspdk_nvmf.a 00:04:10.486 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:10.486 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:10.486 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:10.486 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:10.486 SO libspdk_nvmf.so.17.0 00:04:10.486 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:10.744 CC lib/ftl/base/ftl_base_dev.o 00:04:10.744 CC lib/ftl/base/ftl_base_bdev.o 00:04:10.744 CC lib/ftl/ftl_trace.o 00:04:10.744 SYMLINK libspdk_nvmf.so 00:04:10.744 LIB libspdk_iscsi.a 00:04:10.745 SO libspdk_iscsi.so.7.0 00:04:11.003 LIB libspdk_ftl.a 00:04:11.003 SYMLINK libspdk_iscsi.so 00:04:11.003 SO libspdk_ftl.so.8.0 00:04:11.262 SYMLINK libspdk_ftl.so 00:04:11.262 LIB libspdk_vhost.a 00:04:11.520 SO libspdk_vhost.so.7.1 00:04:11.520 SYMLINK libspdk_vhost.so 00:04:11.779 CC module/env_dpdk/env_dpdk_rpc.o 00:04:11.779 CC module/accel/iaa/accel_iaa.o 00:04:11.779 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:11.779 CC module/accel/dsa/accel_dsa.o 00:04:11.779 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:11.779 CC module/accel/error/accel_error.o 00:04:11.779 CC module/accel/ioat/accel_ioat.o 00:04:11.779 CC module/scheduler/gscheduler/gscheduler.o 00:04:11.779 CC module/blob/bdev/blob_bdev.o 00:04:11.779 CC module/sock/posix/posix.o 00:04:11.779 LIB libspdk_env_dpdk_rpc.a 00:04:11.779 SO libspdk_env_dpdk_rpc.so.5.0 00:04:11.779 LIB libspdk_scheduler_dpdk_governor.a 00:04:12.038 LIB libspdk_scheduler_gscheduler.a 00:04:12.038 SYMLINK libspdk_env_dpdk_rpc.so 00:04:12.038 CC module/accel/error/accel_error_rpc.o 00:04:12.038 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:12.038 SO libspdk_scheduler_gscheduler.so.3.0 00:04:12.038 CC module/accel/ioat/accel_ioat_rpc.o 00:04:12.038 CC module/accel/dsa/accel_dsa_rpc.o 00:04:12.038 CC module/accel/iaa/accel_iaa_rpc.o 00:04:12.038 LIB libspdk_scheduler_dynamic.a 00:04:12.038 SYMLINK libspdk_scheduler_gscheduler.so 00:04:12.038 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:12.038 SO libspdk_scheduler_dynamic.so.3.0 00:04:12.038 LIB libspdk_blob_bdev.a 00:04:12.038 SYMLINK libspdk_scheduler_dynamic.so 00:04:12.038 SO libspdk_blob_bdev.so.10.1 00:04:12.038 LIB libspdk_accel_error.a 00:04:12.038 LIB libspdk_accel_ioat.a 00:04:12.038 LIB libspdk_accel_dsa.a 00:04:12.038 LIB libspdk_accel_iaa.a 00:04:12.038 SO libspdk_accel_error.so.1.0 00:04:12.038 SO libspdk_accel_ioat.so.5.0 00:04:12.038 SYMLINK libspdk_blob_bdev.so 00:04:12.038 SO libspdk_accel_dsa.so.4.0 00:04:12.038 SO libspdk_accel_iaa.so.2.0 00:04:12.038 SYMLINK libspdk_accel_error.so 00:04:12.038 SYMLINK libspdk_accel_ioat.so 00:04:12.038 SYMLINK libspdk_accel_iaa.so 00:04:12.296 SYMLINK libspdk_accel_dsa.so 00:04:12.296 CC module/bdev/passthru/vbdev_passthru.o 00:04:12.296 CC module/bdev/null/bdev_null.o 00:04:12.296 CC module/bdev/gpt/gpt.o 00:04:12.296 CC module/bdev/error/vbdev_error.o 00:04:12.296 CC module/bdev/nvme/bdev_nvme.o 00:04:12.296 CC module/blobfs/bdev/blobfs_bdev.o 00:04:12.296 CC module/bdev/malloc/bdev_malloc.o 00:04:12.296 CC module/bdev/delay/vbdev_delay.o 00:04:12.296 CC module/bdev/lvol/vbdev_lvol.o 00:04:12.296 LIB libspdk_sock_posix.a 00:04:12.555 SO libspdk_sock_posix.so.5.0 00:04:12.555 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:12.555 CC module/bdev/gpt/vbdev_gpt.o 00:04:12.555 CC module/bdev/null/bdev_null_rpc.o 00:04:12.555 SYMLINK 
libspdk_sock_posix.so 00:04:12.555 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:12.555 CC module/bdev/error/vbdev_error_rpc.o 00:04:12.555 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:12.555 LIB libspdk_blobfs_bdev.a 00:04:12.555 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:12.555 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:12.555 SO libspdk_blobfs_bdev.so.5.0 00:04:12.555 LIB libspdk_bdev_null.a 00:04:12.555 LIB libspdk_bdev_error.a 00:04:12.814 LIB libspdk_bdev_delay.a 00:04:12.814 SYMLINK libspdk_blobfs_bdev.so 00:04:12.814 SO libspdk_bdev_null.so.5.0 00:04:12.814 LIB libspdk_bdev_gpt.a 00:04:12.814 SO libspdk_bdev_error.so.5.0 00:04:12.814 SO libspdk_bdev_delay.so.5.0 00:04:12.814 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:12.814 SO libspdk_bdev_gpt.so.5.0 00:04:12.814 LIB libspdk_bdev_passthru.a 00:04:12.814 CC module/bdev/nvme/nvme_rpc.o 00:04:12.814 SYMLINK libspdk_bdev_null.so 00:04:12.814 LIB libspdk_bdev_malloc.a 00:04:12.814 SYMLINK libspdk_bdev_delay.so 00:04:12.814 SO libspdk_bdev_passthru.so.5.0 00:04:12.814 SYMLINK libspdk_bdev_error.so 00:04:12.814 SYMLINK libspdk_bdev_gpt.so 00:04:12.814 SO libspdk_bdev_malloc.so.5.0 00:04:12.814 CC module/bdev/nvme/bdev_mdns_client.o 00:04:12.814 SYMLINK libspdk_bdev_malloc.so 00:04:12.814 CC module/bdev/nvme/vbdev_opal.o 00:04:12.814 SYMLINK libspdk_bdev_passthru.so 00:04:12.814 CC module/bdev/raid/bdev_raid.o 00:04:12.814 CC module/bdev/split/vbdev_split.o 00:04:12.814 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:13.073 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:13.073 CC module/bdev/aio/bdev_aio.o 00:04:13.073 LIB libspdk_bdev_lvol.a 00:04:13.073 SO libspdk_bdev_lvol.so.5.0 00:04:13.073 CC module/bdev/raid/bdev_raid_rpc.o 00:04:13.073 SYMLINK libspdk_bdev_lvol.so 00:04:13.073 CC module/bdev/raid/bdev_raid_sb.o 00:04:13.073 CC module/bdev/split/vbdev_split_rpc.o 00:04:13.073 CC module/bdev/raid/raid0.o 00:04:13.073 CC module/bdev/raid/raid1.o 00:04:13.073 CC module/bdev/raid/concat.o 00:04:13.332 LIB libspdk_bdev_zone_block.a 00:04:13.332 SO libspdk_bdev_zone_block.so.5.0 00:04:13.332 LIB libspdk_bdev_split.a 00:04:13.332 CC module/bdev/aio/bdev_aio_rpc.o 00:04:13.332 SO libspdk_bdev_split.so.5.0 00:04:13.332 SYMLINK libspdk_bdev_zone_block.so 00:04:13.332 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:13.332 SYMLINK libspdk_bdev_split.so 00:04:13.332 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:13.332 CC module/bdev/ftl/bdev_ftl.o 00:04:13.332 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:13.332 CC module/bdev/iscsi/bdev_iscsi.o 00:04:13.332 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:13.591 LIB libspdk_bdev_aio.a 00:04:13.591 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:13.591 SO libspdk_bdev_aio.so.5.0 00:04:13.591 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:13.591 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:13.591 SYMLINK libspdk_bdev_aio.so 00:04:13.591 LIB libspdk_bdev_ftl.a 00:04:13.591 LIB libspdk_bdev_raid.a 00:04:13.849 SO libspdk_bdev_ftl.so.5.0 00:04:13.849 LIB libspdk_bdev_iscsi.a 00:04:13.849 SO libspdk_bdev_raid.so.5.0 00:04:13.849 SO libspdk_bdev_iscsi.so.5.0 00:04:13.849 SYMLINK libspdk_bdev_ftl.so 00:04:13.849 SYMLINK libspdk_bdev_raid.so 00:04:13.849 SYMLINK libspdk_bdev_iscsi.so 00:04:13.849 LIB libspdk_bdev_virtio.a 00:04:14.106 SO libspdk_bdev_virtio.so.5.0 00:04:14.106 SYMLINK libspdk_bdev_virtio.so 00:04:14.106 LIB libspdk_bdev_nvme.a 00:04:14.364 SO libspdk_bdev_nvme.so.6.0 00:04:14.364 SYMLINK libspdk_bdev_nvme.so 00:04:14.623 CC module/event/subsystems/sock/sock.o 00:04:14.623 CC 
module/event/subsystems/vmd/vmd.o 00:04:14.623 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.623 CC module/event/subsystems/scheduler/scheduler.o 00:04:14.623 CC module/event/subsystems/iobuf/iobuf.o 00:04:14.623 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:14.623 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:14.882 LIB libspdk_event_vhost_blk.a 00:04:14.882 LIB libspdk_event_sock.a 00:04:14.882 LIB libspdk_event_vmd.a 00:04:14.882 LIB libspdk_event_scheduler.a 00:04:14.882 LIB libspdk_event_iobuf.a 00:04:14.882 SO libspdk_event_vhost_blk.so.2.0 00:04:14.882 SO libspdk_event_sock.so.4.0 00:04:14.882 SO libspdk_event_scheduler.so.3.0 00:04:14.882 SO libspdk_event_vmd.so.5.0 00:04:14.882 SO libspdk_event_iobuf.so.2.0 00:04:14.882 SYMLINK libspdk_event_vhost_blk.so 00:04:14.882 SYMLINK libspdk_event_scheduler.so 00:04:14.882 SYMLINK libspdk_event_sock.so 00:04:14.882 SYMLINK libspdk_event_vmd.so 00:04:14.882 SYMLINK libspdk_event_iobuf.so 00:04:15.141 CC module/event/subsystems/accel/accel.o 00:04:15.141 LIB libspdk_event_accel.a 00:04:15.141 SO libspdk_event_accel.so.5.0 00:04:15.141 SYMLINK libspdk_event_accel.so 00:04:15.399 CC module/event/subsystems/bdev/bdev.o 00:04:15.658 LIB libspdk_event_bdev.a 00:04:15.658 SO libspdk_event_bdev.so.5.0 00:04:15.658 SYMLINK libspdk_event_bdev.so 00:04:15.916 CC module/event/subsystems/nbd/nbd.o 00:04:15.916 CC module/event/subsystems/scsi/scsi.o 00:04:15.916 CC module/event/subsystems/ublk/ublk.o 00:04:15.916 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:15.916 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:16.174 LIB libspdk_event_nbd.a 00:04:16.174 LIB libspdk_event_ublk.a 00:04:16.174 LIB libspdk_event_scsi.a 00:04:16.174 SO libspdk_event_nbd.so.5.0 00:04:16.174 SO libspdk_event_ublk.so.2.0 00:04:16.174 SO libspdk_event_scsi.so.5.0 00:04:16.174 SYMLINK libspdk_event_ublk.so 00:04:16.174 SYMLINK libspdk_event_nbd.so 00:04:16.174 SYMLINK libspdk_event_scsi.so 00:04:16.174 LIB libspdk_event_nvmf.a 00:04:16.174 SO libspdk_event_nvmf.so.5.0 00:04:16.174 SYMLINK libspdk_event_nvmf.so 00:04:16.433 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:16.433 CC module/event/subsystems/iscsi/iscsi.o 00:04:16.433 LIB libspdk_event_vhost_scsi.a 00:04:16.433 LIB libspdk_event_iscsi.a 00:04:16.433 SO libspdk_event_vhost_scsi.so.2.0 00:04:16.692 SO libspdk_event_iscsi.so.5.0 00:04:16.692 SYMLINK libspdk_event_vhost_scsi.so 00:04:16.692 SYMLINK libspdk_event_iscsi.so 00:04:16.692 SO libspdk.so.5.0 00:04:16.692 SYMLINK libspdk.so 00:04:16.951 CXX app/trace/trace.o 00:04:16.951 CC examples/ioat/perf/perf.o 00:04:16.951 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.951 CC examples/nvme/hello_world/hello_world.o 00:04:16.951 CC examples/sock/hello_world/hello_sock.o 00:04:16.951 CC examples/accel/perf/accel_perf.o 00:04:16.951 CC examples/blob/hello_world/hello_blob.o 00:04:16.951 CC test/accel/dif/dif.o 00:04:16.951 CC examples/nvmf/nvmf/nvmf.o 00:04:16.951 CC examples/bdev/hello_world/hello_bdev.o 00:04:17.210 LINK lsvmd 00:04:17.210 LINK ioat_perf 00:04:17.210 LINK hello_world 00:04:17.210 LINK hello_sock 00:04:17.210 LINK hello_blob 00:04:17.469 LINK hello_bdev 00:04:17.469 CC examples/vmd/led/led.o 00:04:17.469 LINK nvmf 00:04:17.469 LINK spdk_trace 00:04:17.469 LINK dif 00:04:17.469 CC examples/ioat/verify/verify.o 00:04:17.469 CC examples/nvme/reconnect/reconnect.o 00:04:17.469 LINK accel_perf 00:04:17.469 LINK led 00:04:17.727 CC test/app/bdev_svc/bdev_svc.o 00:04:17.727 CC examples/blob/cli/blobcli.o 00:04:17.727 CC 
examples/bdev/bdevperf/bdevperf.o 00:04:17.727 CC app/trace_record/trace_record.o 00:04:17.727 LINK verify 00:04:17.727 CC examples/util/zipf/zipf.o 00:04:17.727 CC examples/thread/thread/thread_ex.o 00:04:17.727 CC examples/idxd/perf/perf.o 00:04:17.727 LINK bdev_svc 00:04:17.727 LINK reconnect 00:04:17.984 LINK zipf 00:04:17.985 LINK spdk_trace_record 00:04:17.985 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:17.985 LINK thread 00:04:17.985 CC test/app/histogram_perf/histogram_perf.o 00:04:17.985 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:17.985 LINK blobcli 00:04:17.985 LINK interrupt_tgt 00:04:17.985 LINK idxd_perf 00:04:17.985 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:17.985 CC app/nvmf_tgt/nvmf_main.o 00:04:18.244 LINK histogram_perf 00:04:18.244 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.244 CC examples/nvme/arbitration/arbitration.o 00:04:18.244 LINK nvmf_tgt 00:04:18.244 CC examples/nvme/hotplug/hotplug.o 00:04:18.244 LINK bdevperf 00:04:18.244 CC test/app/jsoncat/jsoncat.o 00:04:18.502 CC test/bdev/bdevio/bdevio.o 00:04:18.502 LINK jsoncat 00:04:18.502 LINK nvme_fuzz 00:04:18.502 LINK nvme_manage 00:04:18.502 LINK hotplug 00:04:18.502 CC app/iscsi_tgt/iscsi_tgt.o 00:04:18.502 CC test/blobfs/mkfs/mkfs.o 00:04:18.502 LINK arbitration 00:04:18.502 CC test/app/stub/stub.o 00:04:18.761 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:18.761 TEST_HEADER include/spdk/accel.h 00:04:18.761 TEST_HEADER include/spdk/accel_module.h 00:04:18.761 TEST_HEADER include/spdk/assert.h 00:04:18.761 TEST_HEADER include/spdk/barrier.h 00:04:18.761 TEST_HEADER include/spdk/base64.h 00:04:18.761 TEST_HEADER include/spdk/bdev.h 00:04:18.761 TEST_HEADER include/spdk/bdev_module.h 00:04:18.761 TEST_HEADER include/spdk/bdev_zone.h 00:04:18.761 TEST_HEADER include/spdk/bit_array.h 00:04:18.761 TEST_HEADER include/spdk/bit_pool.h 00:04:18.761 TEST_HEADER include/spdk/blob_bdev.h 00:04:18.761 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:18.761 TEST_HEADER include/spdk/blobfs.h 00:04:18.761 TEST_HEADER include/spdk/blob.h 00:04:18.761 TEST_HEADER include/spdk/conf.h 00:04:18.761 TEST_HEADER include/spdk/config.h 00:04:18.761 TEST_HEADER include/spdk/cpuset.h 00:04:18.761 TEST_HEADER include/spdk/crc16.h 00:04:18.761 TEST_HEADER include/spdk/crc32.h 00:04:18.761 TEST_HEADER include/spdk/crc64.h 00:04:18.761 TEST_HEADER include/spdk/dif.h 00:04:18.761 TEST_HEADER include/spdk/dma.h 00:04:18.761 CC examples/nvme/abort/abort.o 00:04:18.761 TEST_HEADER include/spdk/endian.h 00:04:18.761 TEST_HEADER include/spdk/env_dpdk.h 00:04:18.761 TEST_HEADER include/spdk/env.h 00:04:18.761 TEST_HEADER include/spdk/event.h 00:04:18.761 TEST_HEADER include/spdk/fd_group.h 00:04:18.761 TEST_HEADER include/spdk/fd.h 00:04:18.761 TEST_HEADER include/spdk/file.h 00:04:18.761 TEST_HEADER include/spdk/ftl.h 00:04:18.761 LINK iscsi_tgt 00:04:18.761 TEST_HEADER include/spdk/gpt_spec.h 00:04:18.761 TEST_HEADER include/spdk/hexlify.h 00:04:18.761 TEST_HEADER include/spdk/histogram_data.h 00:04:18.761 TEST_HEADER include/spdk/idxd.h 00:04:18.761 TEST_HEADER include/spdk/idxd_spec.h 00:04:18.761 TEST_HEADER include/spdk/init.h 00:04:18.761 TEST_HEADER include/spdk/ioat.h 00:04:18.761 TEST_HEADER include/spdk/ioat_spec.h 00:04:18.761 TEST_HEADER include/spdk/iscsi_spec.h 00:04:18.761 TEST_HEADER include/spdk/json.h 00:04:18.761 TEST_HEADER include/spdk/jsonrpc.h 00:04:18.761 TEST_HEADER include/spdk/likely.h 00:04:18.761 TEST_HEADER include/spdk/log.h 00:04:18.761 TEST_HEADER include/spdk/lvol.h 00:04:18.761 TEST_HEADER 
include/spdk/memory.h 00:04:18.761 TEST_HEADER include/spdk/mmio.h 00:04:18.761 TEST_HEADER include/spdk/nbd.h 00:04:18.761 TEST_HEADER include/spdk/notify.h 00:04:18.761 TEST_HEADER include/spdk/nvme.h 00:04:18.761 TEST_HEADER include/spdk/nvme_intel.h 00:04:18.761 LINK bdevio 00:04:18.761 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:18.761 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:18.761 TEST_HEADER include/spdk/nvme_spec.h 00:04:18.761 LINK mkfs 00:04:19.020 TEST_HEADER include/spdk/nvme_zns.h 00:04:19.020 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:19.020 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:19.020 LINK stub 00:04:19.020 TEST_HEADER include/spdk/nvmf.h 00:04:19.020 TEST_HEADER include/spdk/nvmf_spec.h 00:04:19.020 TEST_HEADER include/spdk/nvmf_transport.h 00:04:19.020 TEST_HEADER include/spdk/opal.h 00:04:19.020 TEST_HEADER include/spdk/opal_spec.h 00:04:19.020 TEST_HEADER include/spdk/pci_ids.h 00:04:19.020 TEST_HEADER include/spdk/pipe.h 00:04:19.020 TEST_HEADER include/spdk/queue.h 00:04:19.020 TEST_HEADER include/spdk/reduce.h 00:04:19.020 TEST_HEADER include/spdk/rpc.h 00:04:19.020 TEST_HEADER include/spdk/scheduler.h 00:04:19.020 TEST_HEADER include/spdk/scsi.h 00:04:19.020 TEST_HEADER include/spdk/scsi_spec.h 00:04:19.020 TEST_HEADER include/spdk/sock.h 00:04:19.020 TEST_HEADER include/spdk/stdinc.h 00:04:19.020 TEST_HEADER include/spdk/string.h 00:04:19.020 TEST_HEADER include/spdk/thread.h 00:04:19.020 TEST_HEADER include/spdk/trace.h 00:04:19.020 TEST_HEADER include/spdk/trace_parser.h 00:04:19.020 TEST_HEADER include/spdk/tree.h 00:04:19.020 TEST_HEADER include/spdk/ublk.h 00:04:19.020 TEST_HEADER include/spdk/util.h 00:04:19.020 LINK cmb_copy 00:04:19.020 TEST_HEADER include/spdk/uuid.h 00:04:19.020 TEST_HEADER include/spdk/version.h 00:04:19.020 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:19.020 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:19.020 TEST_HEADER include/spdk/vhost.h 00:04:19.020 TEST_HEADER include/spdk/vmd.h 00:04:19.020 TEST_HEADER include/spdk/xor.h 00:04:19.020 TEST_HEADER include/spdk/zipf.h 00:04:19.020 CXX test/cpp_headers/accel.o 00:04:19.020 CC test/dma/test_dma/test_dma.o 00:04:19.020 CXX test/cpp_headers/accel_module.o 00:04:19.020 CXX test/cpp_headers/assert.o 00:04:19.020 CXX test/cpp_headers/barrier.o 00:04:19.278 CXX test/cpp_headers/base64.o 00:04:19.278 CC app/spdk_tgt/spdk_tgt.o 00:04:19.278 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:19.278 CXX test/cpp_headers/bdev.o 00:04:19.278 LINK abort 00:04:19.278 LINK test_dma 00:04:19.278 CC test/event/event_perf/event_perf.o 00:04:19.278 LINK spdk_tgt 00:04:19.536 LINK pmr_persistence 00:04:19.536 CC test/env/mem_callbacks/mem_callbacks.o 00:04:19.536 CXX test/cpp_headers/bdev_module.o 00:04:19.536 CC test/lvol/esnap/esnap.o 00:04:19.536 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:19.536 LINK event_perf 00:04:19.536 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:19.536 CC test/event/reactor/reactor.o 00:04:19.536 CC app/spdk_lspci/spdk_lspci.o 00:04:19.795 CXX test/cpp_headers/bdev_zone.o 00:04:19.795 CC test/event/reactor_perf/reactor_perf.o 00:04:19.795 CC test/env/vtophys/vtophys.o 00:04:19.795 LINK reactor 00:04:19.795 LINK spdk_lspci 00:04:19.795 LINK vtophys 00:04:19.795 LINK reactor_perf 00:04:19.795 CXX test/cpp_headers/bit_array.o 00:04:19.795 CXX test/cpp_headers/bit_pool.o 00:04:20.053 LINK iscsi_fuzz 00:04:20.053 LINK vhost_fuzz 00:04:20.053 CC app/spdk_nvme_perf/perf.o 00:04:20.053 LINK mem_callbacks 00:04:20.053 CXX test/cpp_headers/blob_bdev.o 
00:04:20.053 CC test/event/app_repeat/app_repeat.o 00:04:20.053 CXX test/cpp_headers/blobfs_bdev.o 00:04:20.053 CXX test/cpp_headers/blobfs.o 00:04:20.053 CC test/event/scheduler/scheduler.o 00:04:20.053 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:20.312 CXX test/cpp_headers/blob.o 00:04:20.312 LINK app_repeat 00:04:20.312 CC test/env/pci/pci_ut.o 00:04:20.312 CC test/env/memory/memory_ut.o 00:04:20.312 LINK env_dpdk_post_init 00:04:20.312 LINK scheduler 00:04:20.571 CC app/spdk_nvme_identify/identify.o 00:04:20.571 CXX test/cpp_headers/conf.o 00:04:20.571 CC app/spdk_nvme_discover/discovery_aer.o 00:04:20.571 CXX test/cpp_headers/config.o 00:04:20.571 CC app/spdk_top/spdk_top.o 00:04:20.571 CXX test/cpp_headers/cpuset.o 00:04:20.831 LINK spdk_nvme_discover 00:04:20.831 CC test/nvme/aer/aer.o 00:04:20.831 LINK spdk_nvme_perf 00:04:20.831 LINK pci_ut 00:04:20.831 CXX test/cpp_headers/crc16.o 00:04:20.831 CC test/nvme/reset/reset.o 00:04:21.090 CC test/nvme/sgl/sgl.o 00:04:21.090 CXX test/cpp_headers/crc32.o 00:04:21.090 CXX test/cpp_headers/crc64.o 00:04:21.090 LINK aer 00:04:21.090 CXX test/cpp_headers/dif.o 00:04:21.090 LINK reset 00:04:21.349 CC test/rpc_client/rpc_client_test.o 00:04:21.349 LINK memory_ut 00:04:21.349 CC test/nvme/e2edp/nvme_dp.o 00:04:21.349 LINK spdk_nvme_identify 00:04:21.349 LINK sgl 00:04:21.349 CXX test/cpp_headers/dma.o 00:04:21.608 CC test/nvme/overhead/overhead.o 00:04:21.608 CXX test/cpp_headers/endian.o 00:04:21.608 LINK rpc_client_test 00:04:21.608 LINK spdk_top 00:04:21.608 CC test/nvme/err_injection/err_injection.o 00:04:21.608 CXX test/cpp_headers/env_dpdk.o 00:04:21.608 CC app/vhost/vhost.o 00:04:21.866 LINK nvme_dp 00:04:21.866 CC test/thread/poller_perf/poller_perf.o 00:04:21.866 CC app/spdk_dd/spdk_dd.o 00:04:21.866 LINK overhead 00:04:21.866 CXX test/cpp_headers/env.o 00:04:21.866 LINK err_injection 00:04:21.866 CXX test/cpp_headers/event.o 00:04:22.126 LINK poller_perf 00:04:22.126 LINK vhost 00:04:22.126 CC app/fio/nvme/fio_plugin.o 00:04:22.126 CXX test/cpp_headers/fd_group.o 00:04:22.126 CXX test/cpp_headers/fd.o 00:04:22.126 CC test/nvme/startup/startup.o 00:04:22.126 CXX test/cpp_headers/file.o 00:04:22.386 CC app/fio/bdev/fio_plugin.o 00:04:22.386 LINK spdk_dd 00:04:22.386 CXX test/cpp_headers/ftl.o 00:04:22.386 LINK startup 00:04:22.386 CC test/nvme/reserve/reserve.o 00:04:22.386 CC test/nvme/simple_copy/simple_copy.o 00:04:22.645 CC test/nvme/connect_stress/connect_stress.o 00:04:22.645 CXX test/cpp_headers/gpt_spec.o 00:04:22.645 CC test/nvme/boot_partition/boot_partition.o 00:04:22.645 CC test/nvme/compliance/nvme_compliance.o 00:04:22.645 LINK reserve 00:04:22.645 CXX test/cpp_headers/hexlify.o 00:04:22.645 LINK spdk_nvme 00:04:22.645 LINK connect_stress 00:04:22.904 LINK simple_copy 00:04:22.904 LINK boot_partition 00:04:22.904 CC test/nvme/fused_ordering/fused_ordering.o 00:04:22.904 LINK spdk_bdev 00:04:22.904 LINK nvme_compliance 00:04:22.904 CXX test/cpp_headers/histogram_data.o 00:04:22.904 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:22.904 CXX test/cpp_headers/idxd.o 00:04:22.904 CXX test/cpp_headers/idxd_spec.o 00:04:22.904 CC test/nvme/fdp/fdp.o 00:04:22.904 CXX test/cpp_headers/init.o 00:04:23.163 LINK fused_ordering 00:04:23.163 CXX test/cpp_headers/ioat.o 00:04:23.163 CXX test/cpp_headers/ioat_spec.o 00:04:23.163 CXX test/cpp_headers/iscsi_spec.o 00:04:23.163 CC test/nvme/cuse/cuse.o 00:04:23.163 LINK doorbell_aers 00:04:23.163 CXX test/cpp_headers/json.o 00:04:23.163 CXX test/cpp_headers/jsonrpc.o 
00:04:23.163 CXX test/cpp_headers/likely.o 00:04:23.163 CXX test/cpp_headers/log.o 00:04:23.163 LINK fdp 00:04:23.163 CXX test/cpp_headers/lvol.o 00:04:23.163 CXX test/cpp_headers/memory.o 00:04:23.163 CXX test/cpp_headers/mmio.o 00:04:23.422 CXX test/cpp_headers/nbd.o 00:04:23.422 CXX test/cpp_headers/notify.o 00:04:23.422 CXX test/cpp_headers/nvme.o 00:04:23.422 CXX test/cpp_headers/nvme_intel.o 00:04:23.422 CXX test/cpp_headers/nvme_ocssd.o 00:04:23.422 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:23.422 CXX test/cpp_headers/nvme_spec.o 00:04:23.422 CXX test/cpp_headers/nvme_zns.o 00:04:23.422 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.422 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.422 CXX test/cpp_headers/nvmf.o 00:04:23.681 CXX test/cpp_headers/nvmf_spec.o 00:04:23.681 CXX test/cpp_headers/nvmf_transport.o 00:04:23.681 CXX test/cpp_headers/opal.o 00:04:23.681 CXX test/cpp_headers/opal_spec.o 00:04:23.681 CXX test/cpp_headers/pci_ids.o 00:04:23.681 CXX test/cpp_headers/pipe.o 00:04:23.681 CXX test/cpp_headers/queue.o 00:04:23.681 CXX test/cpp_headers/reduce.o 00:04:23.681 CXX test/cpp_headers/rpc.o 00:04:23.681 CXX test/cpp_headers/scheduler.o 00:04:23.681 CXX test/cpp_headers/scsi.o 00:04:23.681 CXX test/cpp_headers/scsi_spec.o 00:04:23.940 CXX test/cpp_headers/sock.o 00:04:23.940 CXX test/cpp_headers/stdinc.o 00:04:23.940 CXX test/cpp_headers/string.o 00:04:23.940 CXX test/cpp_headers/thread.o 00:04:23.940 CXX test/cpp_headers/trace.o 00:04:23.940 CXX test/cpp_headers/trace_parser.o 00:04:23.940 CXX test/cpp_headers/tree.o 00:04:23.940 CXX test/cpp_headers/ublk.o 00:04:23.940 CXX test/cpp_headers/util.o 00:04:24.198 CXX test/cpp_headers/uuid.o 00:04:24.198 CXX test/cpp_headers/version.o 00:04:24.198 CXX test/cpp_headers/vfio_user_pci.o 00:04:24.198 CXX test/cpp_headers/vfio_user_spec.o 00:04:24.198 CXX test/cpp_headers/vhost.o 00:04:24.198 LINK cuse 00:04:24.198 CXX test/cpp_headers/vmd.o 00:04:24.199 CXX test/cpp_headers/xor.o 00:04:24.199 CXX test/cpp_headers/zipf.o 00:04:24.458 LINK esnap 00:04:24.717 00:04:24.717 real 0m47.671s 00:04:24.717 user 4m35.813s 00:04:24.717 sys 1m2.892s 00:04:24.717 04:00:26 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:24.717 04:00:26 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.717 ************************************ 00:04:24.717 END TEST make 00:04:24.717 ************************************ 00:04:24.977 04:00:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:24.977 04:00:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:24.977 04:00:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:24.977 04:00:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:24.977 04:00:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:24.977 04:00:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:24.977 04:00:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:24.977 04:00:26 -- scripts/common.sh@335 -- # IFS=.-: 00:04:24.977 04:00:26 -- scripts/common.sh@335 -- # read -ra ver1 00:04:24.977 04:00:26 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.977 04:00:26 -- scripts/common.sh@336 -- # read -ra ver2 00:04:24.977 04:00:26 -- scripts/common.sh@337 -- # local 'op=<' 00:04:24.977 04:00:26 -- scripts/common.sh@339 -- # ver1_l=2 00:04:24.977 04:00:26 -- scripts/common.sh@340 -- # ver2_l=1 00:04:24.977 04:00:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:24.977 04:00:26 -- scripts/common.sh@343 -- # case "$op" in 00:04:24.977 04:00:26 -- scripts/common.sh@344 -- # : 1 
00:04:24.977 04:00:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:24.977 04:00:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.977 04:00:26 -- scripts/common.sh@364 -- # decimal 1 00:04:24.977 04:00:26 -- scripts/common.sh@352 -- # local d=1 00:04:24.977 04:00:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.977 04:00:26 -- scripts/common.sh@354 -- # echo 1 00:04:24.977 04:00:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:24.977 04:00:26 -- scripts/common.sh@365 -- # decimal 2 00:04:24.977 04:00:26 -- scripts/common.sh@352 -- # local d=2 00:04:24.977 04:00:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.977 04:00:26 -- scripts/common.sh@354 -- # echo 2 00:04:24.977 04:00:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:24.977 04:00:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:24.977 04:00:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:24.977 04:00:26 -- scripts/common.sh@367 -- # return 0 00:04:24.977 04:00:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.977 04:00:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.977 --rc genhtml_branch_coverage=1 00:04:24.977 --rc genhtml_function_coverage=1 00:04:24.977 --rc genhtml_legend=1 00:04:24.977 --rc geninfo_all_blocks=1 00:04:24.977 --rc geninfo_unexecuted_blocks=1 00:04:24.977 00:04:24.977 ' 00:04:24.977 04:00:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.977 --rc genhtml_branch_coverage=1 00:04:24.977 --rc genhtml_function_coverage=1 00:04:24.977 --rc genhtml_legend=1 00:04:24.977 --rc geninfo_all_blocks=1 00:04:24.977 --rc geninfo_unexecuted_blocks=1 00:04:24.977 00:04:24.977 ' 00:04:24.977 04:00:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.977 --rc genhtml_branch_coverage=1 00:04:24.977 --rc genhtml_function_coverage=1 00:04:24.977 --rc genhtml_legend=1 00:04:24.977 --rc geninfo_all_blocks=1 00:04:24.977 --rc geninfo_unexecuted_blocks=1 00:04:24.977 00:04:24.977 ' 00:04:24.977 04:00:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.977 --rc genhtml_branch_coverage=1 00:04:24.977 --rc genhtml_function_coverage=1 00:04:24.978 --rc genhtml_legend=1 00:04:24.978 --rc geninfo_all_blocks=1 00:04:24.978 --rc geninfo_unexecuted_blocks=1 00:04:24.978 00:04:24.978 ' 00:04:24.978 04:00:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.978 04:00:26 -- nvmf/common.sh@7 -- # uname -s 00:04:24.978 04:00:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.978 04:00:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.978 04:00:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.978 04:00:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.978 04:00:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.978 04:00:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.978 04:00:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.978 04:00:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.978 04:00:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.978 04:00:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:04:24.978 04:00:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:04:24.978 04:00:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:04:24.978 04:00:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.978 04:00:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.978 04:00:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:24.978 04:00:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.978 04:00:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.978 04:00:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.978 04:00:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.978 04:00:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.978 04:00:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.978 04:00:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.978 04:00:26 -- paths/export.sh@5 -- # export PATH 00:04:24.978 04:00:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.978 04:00:26 -- nvmf/common.sh@46 -- # : 0 00:04:24.978 04:00:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:24.978 04:00:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:24.978 04:00:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:24.978 04:00:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.978 04:00:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.978 04:00:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:24.978 04:00:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:24.978 04:00:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:24.978 04:00:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.978 04:00:26 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.978 04:00:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.978 04:00:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.978 04:00:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.978 04:00:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.978 04:00:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.978 04:00:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.978 04:00:26 -- spdk/autotest.sh@46 -- # 
type -P udevadm 00:04:24.978 04:00:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.978 04:00:26 -- spdk/autotest.sh@48 -- # udevadm_pid=61815 00:04:24.978 04:00:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.978 04:00:26 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:24.978 04:00:26 -- spdk/autotest.sh@54 -- # echo 61817 00:04:24.978 04:00:26 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:24.978 04:00:26 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:24.978 04:00:26 -- spdk/autotest.sh@56 -- # echo 61818 00:04:24.978 04:00:26 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:24.978 04:00:26 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.978 04:00:26 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:24.978 04:00:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.978 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:04:24.978 04:00:26 -- spdk/autotest.sh@70 -- # create_test_list 00:04:24.978 04:00:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:24.978 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:04:24.978 04:00:26 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.978 04:00:26 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:24.978 04:00:26 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.978 04:00:26 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.978 04:00:26 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.978 04:00:26 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:24.978 04:00:26 -- common/autotest_common.sh@1450 -- # uname 00:04:24.978 04:00:26 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:24.978 04:00:26 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:24.978 04:00:26 -- common/autotest_common.sh@1470 -- # uname 00:04:24.978 04:00:26 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:24.978 04:00:26 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:24.978 04:00:26 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.237 lcov: LCOV version 1.15 00:04:25.237 04:00:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:31.798 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:31.798 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:31.798 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:31.798 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:31.798 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:31.798 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:49.885 04:00:50 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:49.885 04:00:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.885 04:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.885 04:00:50 -- spdk/autotest.sh@89 -- # rm -f 00:04:49.885 04:00:50 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.885 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:49.885 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:49.885 04:00:51 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:49.885 04:00:51 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:49.885 04:00:51 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:49.885 04:00:51 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:49.885 04:00:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:49.885 04:00:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:49.885 04:00:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:49.885 04:00:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.885 04:00:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:49.885 04:00:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:49.885 04:00:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:49.885 04:00:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:49.885 04:00:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.885 04:00:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:49.885 04:00:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:49.885 04:00:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:49.885 04:00:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:49.885 04:00:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:49.885 04:00:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:49.885 04:00:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:49.885 04:00:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:49.885 04:00:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:49.886 04:00:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:49.886 04:00:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:49.886 04:00:51 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:49.886 04:00:51 -- spdk/autotest.sh@108 -- # grep -v p 00:04:49.886 04:00:51 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:49.886 04:00:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:49.886 04:00:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:49.886 04:00:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:49.886 04:00:51 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:49.886 04:00:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:04:49.886 No valid GPT data, bailing 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # pt= 00:04:49.886 04:00:51 -- scripts/common.sh@394 -- # return 1 00:04:49.886 04:00:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:49.886 1+0 records in 00:04:49.886 1+0 records out 00:04:49.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461543 s, 227 MB/s 00:04:49.886 04:00:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:49.886 04:00:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:49.886 04:00:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:49.886 04:00:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:49.886 04:00:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:49.886 No valid GPT data, bailing 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # pt= 00:04:49.886 04:00:51 -- scripts/common.sh@394 -- # return 1 00:04:49.886 04:00:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:49.886 1+0 records in 00:04:49.886 1+0 records out 00:04:49.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508314 s, 206 MB/s 00:04:49.886 04:00:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:49.886 04:00:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:49.886 04:00:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:49.886 04:00:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:49.886 04:00:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:49.886 No valid GPT data, bailing 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # pt= 00:04:49.886 04:00:51 -- scripts/common.sh@394 -- # return 1 00:04:49.886 04:00:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:49.886 1+0 records in 00:04:49.886 1+0 records out 00:04:49.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446432 s, 235 MB/s 00:04:49.886 04:00:51 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:49.886 04:00:51 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:49.886 04:00:51 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:49.886 04:00:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:49.886 04:00:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:49.886 No valid GPT data, bailing 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:49.886 04:00:51 -- scripts/common.sh@393 -- # pt= 00:04:49.886 04:00:51 -- scripts/common.sh@394 -- # return 1 00:04:49.886 04:00:51 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:49.886 1+0 records in 00:04:49.886 1+0 records out 00:04:49.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455662 s, 230 MB/s 00:04:49.886 04:00:51 -- spdk/autotest.sh@116 -- # sync 00:04:49.886 04:00:51 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:49.886 04:00:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:49.886 04:00:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
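Note: the pre_cleanup pass above walks every /dev/nvme*n* namespace (skipping partition nodes via grep -v p), probes each one with spdk-gpt.py and blkid, and zeroes the first MiB of any namespace without a partition table before syncing. A rough standalone sketch of that pattern, approximate only and not the real block_in_use logic from scripts/common.sh:
for dev in /dev/nvme*n*; do
    case "$dev" in *p*) continue ;; esac          # skip partition nodes, like the trace's grep -v p
    pt=$(blkid -s PTTYPE -o value "$dev" || true) # empty when no partition table is found
    if [ -z "$pt" ]; then
        # no partition table detected -> wipe the first MiB so later tests start from a clean device
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
sync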
00:04:52.420 04:00:53 -- spdk/autotest.sh@122 -- # uname -s 00:04:52.420 04:00:53 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:52.420 04:00:53 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:52.420 04:00:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.420 04:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.420 04:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.420 ************************************ 00:04:52.420 START TEST setup.sh 00:04:52.420 ************************************ 00:04:52.420 04:00:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:52.420 * Looking for test storage... 00:04:52.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:52.420 04:00:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.420 04:00:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.420 04:00:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.420 04:00:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.420 04:00:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.420 04:00:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.420 04:00:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.420 04:00:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.420 04:00:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.421 04:00:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.421 04:00:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.421 04:00:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.421 04:00:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.421 04:00:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.421 04:00:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.421 04:00:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.421 04:00:53 -- scripts/common.sh@344 -- # : 1 00:04:52.421 04:00:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.421 04:00:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.421 04:00:53 -- scripts/common.sh@364 -- # decimal 1 00:04:52.421 04:00:53 -- scripts/common.sh@352 -- # local d=1 00:04:52.421 04:00:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.421 04:00:53 -- scripts/common.sh@354 -- # echo 1 00:04:52.421 04:00:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.421 04:00:53 -- scripts/common.sh@365 -- # decimal 2 00:04:52.421 04:00:53 -- scripts/common.sh@352 -- # local d=2 00:04:52.421 04:00:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.421 04:00:53 -- scripts/common.sh@354 -- # echo 2 00:04:52.421 04:00:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.421 04:00:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.421 04:00:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.421 04:00:53 -- scripts/common.sh@367 -- # return 0 00:04:52.421 04:00:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.421 04:00:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:53 -- setup/test-setup.sh@10 -- # uname -s 00:04:52.421 04:00:53 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:52.421 04:00:53 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:52.421 04:00:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.421 04:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.421 04:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.421 ************************************ 00:04:52.421 START TEST acl 00:04:52.421 ************************************ 00:04:52.421 04:00:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:52.421 * Looking for test storage... 
00:04:52.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:52.421 04:00:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.421 04:00:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.421 04:00:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.421 04:00:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.421 04:00:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.421 04:00:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.421 04:00:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.421 04:00:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.421 04:00:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.421 04:00:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.421 04:00:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.421 04:00:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.421 04:00:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.421 04:00:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.421 04:00:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.421 04:00:54 -- scripts/common.sh@344 -- # : 1 00:04:52.421 04:00:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.421 04:00:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.421 04:00:54 -- scripts/common.sh@364 -- # decimal 1 00:04:52.421 04:00:54 -- scripts/common.sh@352 -- # local d=1 00:04:52.421 04:00:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.421 04:00:54 -- scripts/common.sh@354 -- # echo 1 00:04:52.421 04:00:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.421 04:00:54 -- scripts/common.sh@365 -- # decimal 2 00:04:52.421 04:00:54 -- scripts/common.sh@352 -- # local d=2 00:04:52.421 04:00:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.421 04:00:54 -- scripts/common.sh@354 -- # echo 2 00:04:52.421 04:00:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.421 04:00:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.421 04:00:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.421 04:00:54 -- scripts/common.sh@367 -- # return 0 00:04:52.421 04:00:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.421 04:00:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:54 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.421 --rc genhtml_branch_coverage=1 00:04:52.421 --rc genhtml_function_coverage=1 00:04:52.421 --rc genhtml_legend=1 00:04:52.421 --rc geninfo_all_blocks=1 00:04:52.421 --rc geninfo_unexecuted_blocks=1 00:04:52.421 00:04:52.421 ' 00:04:52.421 04:00:54 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:52.421 04:00:54 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:52.421 04:00:54 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:52.421 04:00:54 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:52.421 04:00:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.421 04:00:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:52.421 04:00:54 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:52.421 04:00:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.421 04:00:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:52.421 04:00:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:52.421 04:00:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.421 04:00:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:52.421 04:00:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:52.421 04:00:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:52.421 04:00:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:52.421 04:00:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:52.421 04:00:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:52.421 04:00:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:52.421 04:00:54 -- setup/acl.sh@12 -- # devs=() 00:04:52.421 04:00:54 -- setup/acl.sh@12 -- # declare -a devs 00:04:52.421 04:00:54 -- setup/acl.sh@13 -- # drivers=() 00:04:52.421 04:00:54 -- setup/acl.sh@13 -- # declare -A drivers 00:04:52.421 04:00:54 -- setup/acl.sh@51 -- # setup reset 00:04:52.421 04:00:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.422 04:00:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.375 04:00:54 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:53.375 04:00:54 -- setup/acl.sh@16 -- # local dev driver 00:04:53.375 04:00:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.375 04:00:54 -- setup/acl.sh@15 -- # setup output status 00:04:53.375 04:00:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.375 04:00:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:53.375 Hugepages 00:04:53.375 node hugesize free / total 00:04:53.375 04:00:55 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:53.375 04:00:55 -- setup/acl.sh@19 -- # continue 00:04:53.375 04:00:55 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:53.375 00:04:53.375 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.375 04:00:55 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:53.375 04:00:55 -- setup/acl.sh@19 -- # continue 00:04:53.375 04:00:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.634 04:00:55 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:53.634 04:00:55 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:53.634 04:00:55 -- setup/acl.sh@20 -- # continue 00:04:53.634 04:00:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.634 04:00:55 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:53.634 04:00:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:53.634 04:00:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:53.634 04:00:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:53.634 04:00:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:53.634 04:00:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.634 04:00:55 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:53.634 04:00:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:53.634 04:00:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:53.634 04:00:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:53.634 04:00:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:53.634 04:00:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.634 04:00:55 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:53.634 04:00:55 -- setup/acl.sh@54 -- # run_test denied denied 00:04:53.634 04:00:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.634 04:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.634 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.634 ************************************ 00:04:53.634 START TEST denied 00:04:53.634 ************************************ 00:04:53.634 04:00:55 -- common/autotest_common.sh@1114 -- # denied 00:04:53.634 04:00:55 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:53.634 04:00:55 -- setup/acl.sh@38 -- # setup output config 00:04:53.634 04:00:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.634 04:00:55 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:53.634 04:00:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.571 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:54.571 04:00:56 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:54.571 04:00:56 -- setup/acl.sh@28 -- # local dev driver 00:04:54.571 04:00:56 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:54.571 04:00:56 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:54.571 04:00:56 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:54.571 04:00:56 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:54.571 04:00:56 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:54.571 04:00:56 -- setup/acl.sh@41 -- # setup reset 00:04:54.571 04:00:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.571 04:00:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.507 00:04:55.507 real 0m1.569s 00:04:55.507 user 0m0.629s 00:04:55.507 sys 0m0.890s 00:04:55.507 04:00:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.507 04:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.507 ************************************ 00:04:55.507 END TEST denied 00:04:55.507 
************************************ 00:04:55.508 04:00:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:55.508 04:00:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.508 04:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.508 04:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.508 ************************************ 00:04:55.508 START TEST allowed 00:04:55.508 ************************************ 00:04:55.508 04:00:56 -- common/autotest_common.sh@1114 -- # allowed 00:04:55.508 04:00:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:55.508 04:00:56 -- setup/acl.sh@45 -- # setup output config 00:04:55.508 04:00:56 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:55.508 04:00:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.508 04:00:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.076 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.076 04:00:57 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:56.076 04:00:57 -- setup/acl.sh@28 -- # local dev driver 00:04:56.076 04:00:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:56.076 04:00:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:56.076 04:00:57 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:56.076 04:00:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:56.076 04:00:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:56.076 04:00:57 -- setup/acl.sh@48 -- # setup reset 00:04:56.076 04:00:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.076 04:00:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.015 00:04:57.015 real 0m1.608s 00:04:57.015 user 0m0.705s 00:04:57.015 sys 0m0.911s 00:04:57.015 04:00:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.015 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.015 ************************************ 00:04:57.015 END TEST allowed 00:04:57.015 ************************************ 00:04:57.015 ************************************ 00:04:57.015 END TEST acl 00:04:57.015 ************************************ 00:04:57.015 00:04:57.015 real 0m4.640s 00:04:57.015 user 0m2.006s 00:04:57.015 sys 0m2.612s 00:04:57.015 04:00:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.015 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.015 04:00:58 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:57.015 04:00:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.015 04:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.015 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.015 ************************************ 00:04:57.015 START TEST hugepages 00:04:57.015 ************************************ 00:04:57.015 04:00:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:57.015 * Looking for test storage... 
00:04:57.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.015 04:00:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.015 04:00:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.015 04:00:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.275 04:00:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.275 04:00:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.275 04:00:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.275 04:00:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.275 04:00:58 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.275 04:00:58 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.275 04:00:58 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.275 04:00:58 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.275 04:00:58 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.275 04:00:58 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.275 04:00:58 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.275 04:00:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.275 04:00:58 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.275 04:00:58 -- scripts/common.sh@344 -- # : 1 00:04:57.275 04:00:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.275 04:00:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.275 04:00:58 -- scripts/common.sh@364 -- # decimal 1 00:04:57.275 04:00:58 -- scripts/common.sh@352 -- # local d=1 00:04:57.275 04:00:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.275 04:00:58 -- scripts/common.sh@354 -- # echo 1 00:04:57.275 04:00:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.275 04:00:58 -- scripts/common.sh@365 -- # decimal 2 00:04:57.275 04:00:58 -- scripts/common.sh@352 -- # local d=2 00:04:57.275 04:00:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.275 04:00:58 -- scripts/common.sh@354 -- # echo 2 00:04:57.275 04:00:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.275 04:00:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.275 04:00:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.275 04:00:58 -- scripts/common.sh@367 -- # return 0 00:04:57.275 04:00:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.275 04:00:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 04:00:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 04:00:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 04:00:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 04:00:58 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:57.275 04:00:58 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:57.275 04:00:58 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:57.275 04:00:58 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:57.275 04:00:58 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:57.275 04:00:58 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:57.275 04:00:58 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:57.275 04:00:58 -- setup/common.sh@18 -- # local node= 00:04:57.275 04:00:58 -- setup/common.sh@19 -- # local var val 00:04:57.275 04:00:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.275 04:00:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.275 04:00:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.275 04:00:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.275 04:00:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.275 04:00:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 4415072 kB' 'MemAvailable: 7341640 kB' 'Buffers: 2684 kB' 'Cached: 3127288 kB' 'SwapCached: 0 kB' 'Active: 496456 kB' 'Inactive: 2750308 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118520 kB' 'Mapped: 50800 kB' 'Shmem: 10512 kB' 'KReclaimable: 88516 kB' 'Slab: 190996 kB' 'SReclaimable: 88516 kB' 'SUnreclaim: 102480 kB' 'KernelStack: 6864 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 320760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- 
setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.275 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.275 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.276 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.276 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.277 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.277 04:00:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.277 04:00:58 -- setup/common.sh@32 -- # continue 00:04:57.277 04:00:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.277 04:00:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.277 04:00:58 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:57.277 04:00:58 -- setup/common.sh@33 -- # echo 2048 00:04:57.277 04:00:58 -- setup/common.sh@33 -- # return 0 00:04:57.277 04:00:58 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:57.277 04:00:58 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:57.277 04:00:58 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:57.277 04:00:58 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:57.277 04:00:58 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:57.277 04:00:58 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:57.277 04:00:58 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:57.277 04:00:58 -- setup/hugepages.sh@207 -- # get_nodes 00:04:57.277 04:00:58 -- setup/hugepages.sh@27 -- # local node 00:04:57.277 04:00:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.277 04:00:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:57.277 04:00:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:57.277 04:00:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.277 04:00:58 -- setup/hugepages.sh@208 -- # clear_hp 00:04:57.277 04:00:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:57.277 04:00:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:57.277 04:00:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.277 04:00:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:57.277 04:00:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.277 04:00:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:57.277 04:00:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:57.277 04:00:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:57.277 04:00:58 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:57.277 04:00:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.277 04:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.277 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.277 ************************************ 00:04:57.277 START TEST default_setup 00:04:57.277 ************************************ 00:04:57.277 04:00:58 -- common/autotest_common.sh@1114 -- # default_setup 00:04:57.277 04:00:58 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:57.277 04:00:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:57.277 04:00:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:57.277 04:00:58 -- setup/hugepages.sh@51 -- # shift 00:04:57.277 04:00:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:57.277 04:00:58 -- setup/hugepages.sh@52 -- # local node_ids 00:04:57.277 04:00:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.277 04:00:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:57.277 04:00:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:57.277 04:00:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:57.277 04:00:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.277 04:00:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:57.277 04:00:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:57.277 04:00:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.277 04:00:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.277 04:00:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:57.277 04:00:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:57.277 04:00:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:57.277 04:00:58 -- setup/hugepages.sh@73 -- # return 0 00:04:57.277 04:00:58 -- setup/hugepages.sh@137 -- # setup output 00:04:57.277 04:00:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.277 04:00:58 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.105 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.105 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.105 04:00:59 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:58.105 04:00:59 -- setup/hugepages.sh@89 -- # local node 00:04:58.105 04:00:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.105 04:00:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.105 04:00:59 -- setup/hugepages.sh@92 -- # local surp 00:04:58.105 04:00:59 -- setup/hugepages.sh@93 -- # local resv 00:04:58.105 04:00:59 -- setup/hugepages.sh@94 -- # local anon 00:04:58.105 04:00:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.105 04:00:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.105 04:00:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.105 04:00:59 -- setup/common.sh@18 -- # local node= 00:04:58.105 04:00:59 -- setup/common.sh@19 -- # local var val 00:04:58.105 04:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.105 04:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.105 04:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.105 04:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.105 04:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.105 04:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6486528 kB' 'MemAvailable: 9412900 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 498208 kB' 'Inactive: 2750308 kB' 'Active(anon): 129056 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 50992 kB' 'Shmem: 10492 kB' 'KReclaimable: 88124 kB' 'Slab: 190820 kB' 'SReclaimable: 88124 kB' 'SUnreclaim: 102696 kB' 'KernelStack: 6832 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.105 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.105 04:00:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- 
setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.106 04:00:59 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.106 04:00:59 -- setup/common.sh@33 -- # echo 0 00:04:58.106 04:00:59 -- setup/common.sh@33 -- # return 0 00:04:58.106 04:00:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.106 04:00:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.106 04:00:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.106 04:00:59 -- setup/common.sh@18 -- # local node= 00:04:58.106 04:00:59 -- setup/common.sh@19 -- # local var val 00:04:58.106 04:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.106 04:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.106 04:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.106 04:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.106 04:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.106 04:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.106 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6485772 kB' 'MemAvailable: 9412152 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497808 kB' 'Inactive: 2750316 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119744 kB' 'Mapped: 51008 kB' 'Shmem: 10492 kB' 'KReclaimable: 88124 kB' 'Slab: 190812 kB' 'SReclaimable: 88124 kB' 'SUnreclaim: 102688 kB' 'KernelStack: 6768 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 
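(The xtrace above is setup/common.sh's get_meminfo helper walking every /proc/meminfo field and skipping each one with "continue" until the requested key matches. A minimal sketch of that scan, assuming a stand-in helper name — get_field is illustrative, not the real setup/common.sh function:

get_field() {
    # Scan /proc/meminfo for one field and print its value (kB for sizes,
    # a bare page count for the HugePages_* counters). Illustrative only.
    local get=$1 line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
        echo "$val"
        return 0
    done
    return 1
}

get_field Hugepagesize    # -> 2048 on this VM, which is where default_hugepages=2048 above comes from
get_field HugePages_Surp  # -> 0, the value the pass in progress here assigns to surp

The same loop is what resolves AnonHugePages to 0 for the anon= check just before this.)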
00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- 
setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.107 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.107 04:00:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 
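(Before this verification pass, the hugepages.sh helpers seen earlier — clear_hp and get_test_nr_hugepages — work against the standard kernel hugepage knobs. A rough sketch of those interfaces, assuming root and 2 MiB pages; the loop structure below is illustrative, not SPDK's exact helper logic:

# Stock Linux hugepage knobs (the same paths default_huge_nr/global_huge_nr point at above).
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
global_huge_nr=/proc/sys/vm/nr_hugepages

# clear_hp equivalent: zero any existing per-node reservations first.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done

# get_test_nr_hugepages 2097152 asks for 2 GiB of 2048 kB pages:
# 2097152 kB / 2048 kB = 1024 pages, the nr_hugepages=1024 echoed further down.
echo 1024 > "$global_huge_nr"

# The kernel then reports what it actually allocated, which is what
# verify_nr_hugepages reads back here:
grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo
)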
00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.108 04:00:59 -- setup/common.sh@33 -- # echo 0 00:04:58.108 04:00:59 -- setup/common.sh@33 -- # return 0 00:04:58.108 04:00:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.108 04:00:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.108 04:00:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.108 04:00:59 -- setup/common.sh@18 -- # local node= 00:04:58.108 04:00:59 -- setup/common.sh@19 -- # local var val 00:04:58.108 04:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.108 04:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.108 04:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.108 04:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.108 04:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.108 04:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.108 
04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6485524 kB' 'MemAvailable: 9411900 kB' 'Buffers: 2684 kB' 'Cached: 3127280 kB' 'SwapCached: 0 kB' 'Active: 497676 kB' 'Inactive: 2750316 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190804 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102684 kB' 'KernelStack: 6784 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 
04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.108 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.108 04:00:59 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.109 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.109 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.370 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.370 04:00:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 
04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.371 04:00:59 -- setup/common.sh@33 -- # echo 0 00:04:58.371 04:00:59 -- setup/common.sh@33 -- # return 0 00:04:58.371 04:00:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.371 nr_hugepages=1024 00:04:58.371 04:00:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.371 resv_hugepages=0 00:04:58.371 04:00:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.371 surplus_hugepages=0 00:04:58.371 04:00:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.371 anon_hugepages=0 00:04:58.371 04:00:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.371 04:00:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.371 04:00:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.371 04:00:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.371 04:00:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.371 04:00:59 -- setup/common.sh@18 -- # local node= 00:04:58.371 04:00:59 -- setup/common.sh@19 -- # local var val 00:04:58.371 04:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.371 04:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.371 04:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.371 04:00:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.371 04:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.371 04:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6485524 kB' 'MemAvailable: 9411900 kB' 'Buffers: 2684 kB' 'Cached: 3127280 kB' 'SwapCached: 0 kB' 'Active: 497880 kB' 'Inactive: 2750316 kB' 'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190800 kB' 
'SReclaimable: 88120 kB' 'SUnreclaim: 102680 kB' 'KernelStack: 6768 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 
04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- 
setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.371 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.371 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- 
setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.372 04:00:59 -- setup/common.sh@33 -- # echo 1024 00:04:58.372 04:00:59 -- setup/common.sh@33 -- # return 0 00:04:58.372 04:00:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.372 04:00:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.372 04:00:59 -- setup/hugepages.sh@27 -- # local node 00:04:58.372 04:00:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.372 04:00:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.372 04:00:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.372 04:00:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.372 04:00:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.372 04:00:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.372 04:00:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.372 04:00:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.372 04:00:59 -- setup/common.sh@18 -- # local node=0 00:04:58.372 04:00:59 -- setup/common.sh@19 -- # local var val 00:04:58.372 04:00:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.372 04:00:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.372 04:00:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.372 04:00:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.372 04:00:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.372 04:00:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6485272 kB' 'MemUsed: 5753836 kB' 'SwapCached: 0 kB' 'Active: 497768 kB' 'Inactive: 2750316 kB' 'Active(anon): 128616 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3129964 kB' 'Mapped: 50900 kB' 'AnonPages: 119712 kB' 'Shmem: 10488 kB' 'KernelStack: 6820 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88120 kB' 'Slab: 190804 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 
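The scan traced above is setup/common.sh's get_meminfo helper: it reads the relevant meminfo file one "key: value" pair at a time with IFS=': ', skips every key that is not the one requested, and echoes the matching value back to hugepages.sh. Here it returned 1024 for HugePages_Total, which hugepages.sh checks against nr_hugepages + surplus + reserved before walking each NUMA node (a single node0 on this VM) and querying that node's own meminfo. A minimal sketch of that lookup pattern, reconstructed from the trace rather than copied from the real setup/common.sh, could look like this:

  shopt -s extglob    # the "Node +([0-9]) " prefix strip below needs extglob

  get_meminfo() {     # usage: get_meminfo HugePages_Total [node]
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # a per-node query reads that node's meminfo instead of the global file
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines are prefixed "Node 0 "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip keys until the requested one
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total     # prints 1024 on this VM
  get_meminfo HugePages_Surp 0    # prints 0 for node 0

The per-node scan for HugePages_Surp that this sketch summarizes continues in the trace below.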
00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.372 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.372 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # continue 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.373 04:00:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.373 04:00:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.373 04:00:59 -- setup/common.sh@33 -- # echo 0 00:04:58.373 04:00:59 -- setup/common.sh@33 -- # return 0 00:04:58.373 04:00:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.373 04:00:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.373 04:00:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.373 04:00:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.373 node0=1024 expecting 1024 00:04:58.373 04:00:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.373 04:00:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.373 00:04:58.373 real 0m1.025s 00:04:58.373 user 0m0.525s 00:04:58.373 sys 0m0.443s 00:04:58.373 04:00:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.373 04:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.373 ************************************ 00:04:58.373 END TEST default_setup 00:04:58.373 ************************************ 00:04:58.373 04:00:59 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:58.373 04:00:59 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.373 04:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.373 04:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.373 ************************************ 00:04:58.373 START TEST per_node_1G_alloc 00:04:58.373 ************************************ 00:04:58.373 04:00:59 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:58.373 04:00:59 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:58.373 04:00:59 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:58.373 04:00:59 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:58.373 04:00:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.373 04:00:59 -- setup/hugepages.sh@51 -- # shift 00:04:58.373 04:00:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:58.373 04:00:59 -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.373 04:00:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.373 04:00:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:58.373 04:00:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.373 04:00:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:58.373 04:00:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.373 04:00:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:58.373 04:00:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:58.373 04:00:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.373 04:00:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.373 04:00:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:58.373 04:00:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.373 04:00:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:58.373 04:00:59 -- setup/hugepages.sh@73 -- # return 0 00:04:58.373 04:00:59 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:58.373 04:00:59 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:58.374 04:00:59 -- setup/hugepages.sh@146 -- # setup output 00:04:58.374 04:00:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.374 04:00:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.632 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.632 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.895 04:01:00 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:58.895 04:01:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:58.895 04:01:00 -- setup/hugepages.sh@89 -- # local node 00:04:58.895 04:01:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.895 04:01:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.895 04:01:00 -- setup/hugepages.sh@92 -- # local surp 00:04:58.895 04:01:00 -- setup/hugepages.sh@93 -- # local resv 00:04:58.895 04:01:00 -- setup/hugepages.sh@94 -- # local anon 00:04:58.895 04:01:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.895 04:01:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.895 04:01:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.895 04:01:00 -- setup/common.sh@18 -- # local node= 00:04:58.895 04:01:00 -- setup/common.sh@19 -- # local var val 00:04:58.895 04:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.895 04:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.895 04:01:00 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.895 04:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.895 04:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.895 04:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.895 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.895 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7542832 kB' 'MemAvailable: 10469216 kB' 'Buffers: 2684 kB' 'Cached: 3127280 kB' 'SwapCached: 0 kB' 'Active: 498076 kB' 'Inactive: 2750324 kB' 'Active(anon): 128924 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190860 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102740 kB' 'KernelStack: 6804 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 
-- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 
04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.896 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.896 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.897 04:01:00 -- setup/common.sh@33 -- # echo 0 00:04:58.897 04:01:00 -- setup/common.sh@33 -- # return 0 00:04:58.897 04:01:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.897 04:01:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.897 04:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.897 04:01:00 -- setup/common.sh@18 -- # local node= 00:04:58.897 04:01:00 -- setup/common.sh@19 -- # local var val 00:04:58.897 04:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.897 04:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.897 04:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.897 04:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.897 04:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.897 04:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7542832 kB' 'MemAvailable: 10469216 kB' 'Buffers: 2684 kB' 'Cached: 3127280 kB' 'SwapCached: 0 kB' 'Active: 497848 kB' 'Inactive: 2750324 kB' 
'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119852 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190872 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102752 kB' 'KernelStack: 6832 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # 
continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.897 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.897 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.898 04:01:00 -- setup/common.sh@33 -- # echo 0 00:04:58.898 04:01:00 -- setup/common.sh@33 -- # return 0 00:04:58.898 04:01:00 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.898 04:01:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.898 04:01:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.898 04:01:00 -- setup/common.sh@18 -- # local node= 00:04:58.898 04:01:00 -- setup/common.sh@19 -- # local var val 00:04:58.898 04:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.898 04:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.898 04:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.898 04:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.898 04:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.898 04:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7543408 kB' 'MemAvailable: 10469792 kB' 'Buffers: 2684 kB' 'Cached: 3127280 kB' 'SwapCached: 0 kB' 'Active: 497868 kB' 'Inactive: 2750324 kB' 'Active(anon): 128716 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190872 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102752 kB' 'KernelStack: 6800 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 
'DirectMap1G: 9437184 kB' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.898 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.898 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 
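For this per_node_1G_alloc pass the harness asked setup.sh for 1 GiB of hugepages pinned to NUMA node 0 (NRHUGE=512 with the default 2048 kB page size, HUGENODE=0), and the meminfo snapshot above already reflects it: HugePages_Total: 512, Hugetlb: 1048576 kB. As a rough illustration of what that request amounts to on the kernel side (an assumption about what setup.sh does internally, not its verbatim code), the per-node pool can be sized and read back through sysfs:

  # request 512 x 2 MiB pages (= 1 GiB) on NUMA node 0; needs root, and the
  # kernel may allocate fewer if contiguous memory on that node runs short
  echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

  # the per-node counter that verify_nr_hugepages keeps re-reading
  grep HugePages_Total /sys/devices/system/node/node0/meminfo

The get_meminfo scan continuing below is the same kind of lookup, this time for HugePages_Rsvd, before the per-node totals are compared.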
00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 
04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.899 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.899 04:01:00 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.899 04:01:00 -- setup/common.sh@33 -- # echo 0 00:04:58.900 04:01:00 -- setup/common.sh@33 -- # return 0 00:04:58.900 04:01:00 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.900 nr_hugepages=512 00:04:58.900 04:01:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:58.900 resv_hugepages=0 00:04:58.900 04:01:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.900 surplus_hugepages=0 00:04:58.900 04:01:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.900 anon_hugepages=0 00:04:58.900 04:01:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.900 04:01:00 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:58.900 04:01:00 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:58.900 04:01:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.900 04:01:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.900 04:01:00 -- setup/common.sh@18 -- # local node= 00:04:58.900 04:01:00 -- setup/common.sh@19 -- # local var val 00:04:58.900 04:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.900 04:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.900 04:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.900 04:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.900 04:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.900 04:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.900 04:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7543408 kB' 'MemAvailable: 10469792 kB' 'Buffers: 2684 kB' 'Cached: 3127280 kB' 'SwapCached: 0 kB' 'Active: 497896 kB' 'Inactive: 2750324 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119852 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190868 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102748 kB' 'KernelStack: 6816 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 
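The scan traced above is the meminfo-lookup pattern used throughout these tests: read /proc/meminfo one line at a time with IFS=': ', compare each key against the requested field (HugePages_Rsvd here), and echo its value once it matches; the caller then derives resv_hugepages, surplus_hugepages and anon_hugepages from the echoed numbers and asserts (( 512 == nr_hugepages + surp + resv )). A minimal standalone sketch of that lookup, using a hypothetical helper name meminfo_value rather than the project's get_meminfo from setup/common.sh:

# Hypothetical, simplified stand-in for the lookup traced above.
meminfo_value() {
    local get=$1                      # field to report, e.g. HugePages_Rsvd
    local mem_f=${2:-/proc/meminfo}   # source file
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# e.g. resv=$(meminfo_value HugePages_Rsvd)   # -> 0 in the run above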
00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 
00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.900 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.900 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.901 04:01:00 -- setup/common.sh@33 -- # echo 512 00:04:58.901 04:01:00 -- setup/common.sh@33 -- # return 0 00:04:58.901 04:01:00 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:58.901 04:01:00 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.901 04:01:00 -- setup/hugepages.sh@27 -- # local node 00:04:58.901 04:01:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.901 04:01:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:58.901 04:01:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.901 04:01:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.901 04:01:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.901 04:01:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.901 04:01:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.901 04:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.901 04:01:00 -- setup/common.sh@18 -- # local node=0 00:04:58.901 04:01:00 -- 
setup/common.sh@19 -- # local var val 00:04:58.901 04:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.901 04:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.901 04:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.901 04:01:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.901 04:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.901 04:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7543408 kB' 'MemUsed: 4695700 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 2750324 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3129964 kB' 'Mapped: 50900 kB' 'AnonPages: 119752 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88120 kB' 'Slab: 190864 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.901 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.901 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 
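For the node-scoped check running above, the same lookup is simply pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; those per-node lines carry a leading "Node 0 " prefix, which the traced scripts strip with the mem array expansion before re-using the scan. An illustrative, self-contained way to read the same counters per node (the loop and variable names here are illustrative, not the project's code):

# Illustrative only: print hugepage counters for each NUMA node.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # lines look like "Node 0 HugePages_Total:   512"; two extra fields absorb the prefix
    while IFS=': ' read -r _word _id var val _; do
        case $var in
            HugePages_Total|HugePages_Free|HugePages_Surp)
                echo "node$node $var=$val" ;;
        esac
    done < "$node_dir/meminfo"
done
# node0 in this run reports HugePages_Total 512, HugePages_Free 512, HugePages_Surp 0.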
00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 
00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # continue 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.902 04:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.902 04:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.902 04:01:00 -- setup/common.sh@33 -- # echo 0 00:04:58.902 04:01:00 -- setup/common.sh@33 -- # return 0 00:04:58.902 04:01:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.902 04:01:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.902 04:01:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.902 04:01:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.902 node0=512 expecting 512 00:04:58.902 04:01:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:58.902 04:01:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:58.902 00:04:58.902 real 0m0.604s 00:04:58.902 user 0m0.267s 00:04:58.902 sys 0m0.347s 00:04:58.902 04:01:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.902 04:01:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.902 ************************************ 00:04:58.902 END TEST per_node_1G_alloc 00:04:58.902 ************************************ 00:04:58.902 04:01:00 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:58.902 04:01:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.902 04:01:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.902 04:01:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.902 ************************************ 00:04:58.902 START TEST even_2G_alloc 00:04:58.902 ************************************ 00:04:58.902 04:01:00 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:58.902 04:01:00 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:58.902 04:01:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.902 04:01:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.902 04:01:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.902 04:01:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.902 04:01:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.902 04:01:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.902 04:01:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.902 04:01:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.902 04:01:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:58.902 04:01:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.161 04:01:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.161 04:01:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
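The even_2G_alloc test that begins above asks get_test_nr_hugepages for 2097152; with the 2048 kB hugepage size reported in the meminfo dumps that works out to 2097152 / 2048 = 1024 pages, matching the HugePages_Total: 1024 and Hugetlb: 2097152 kB seen in the next dump, and with a single NUMA node the whole allotment lands on node0 before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A hedged sketch of that sizing arithmetic (names illustrative, units inferred from the values observed in the log, not the project's code):

# Sketch of the even-allocation sizing seen in the trace.
size=2097152                     # argument passed to get_test_nr_hugepages
hugepagesize_kb=2048             # Hugepagesize reported in the meminfo dumps
nr_hugepages=$(( size / hugepagesize_kb ))   # -> 1024
no_nodes=1                       # nodes present on this VM
echo "NRHUGE=$nr_hugepages per-node share: $(( nr_hugepages / no_nodes ))"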
00:04:59.161 04:01:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:59.161 04:01:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.161 04:01:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:59.161 04:01:00 -- setup/hugepages.sh@83 -- # : 0 00:04:59.161 04:01:00 -- setup/hugepages.sh@84 -- # : 0 00:04:59.161 04:01:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.161 04:01:00 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:59.161 04:01:00 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:59.161 04:01:00 -- setup/hugepages.sh@153 -- # setup output 00:04:59.161 04:01:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.161 04:01:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.425 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.425 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.425 04:01:01 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:59.425 04:01:01 -- setup/hugepages.sh@89 -- # local node 00:04:59.425 04:01:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.425 04:01:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.425 04:01:01 -- setup/hugepages.sh@92 -- # local surp 00:04:59.425 04:01:01 -- setup/hugepages.sh@93 -- # local resv 00:04:59.425 04:01:01 -- setup/hugepages.sh@94 -- # local anon 00:04:59.425 04:01:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.425 04:01:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.425 04:01:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.425 04:01:01 -- setup/common.sh@18 -- # local node= 00:04:59.425 04:01:01 -- setup/common.sh@19 -- # local var val 00:04:59.425 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.425 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.425 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.425 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.425 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.425 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.425 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.425 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.425 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6488816 kB' 'MemAvailable: 9415204 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497676 kB' 'Inactive: 2750328 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119608 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190836 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102716 kB' 'KernelStack: 6808 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 
0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 
04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- 
setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.426 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.426 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.426 04:01:01 -- setup/common.sh@33 -- # echo 0 00:04:59.426 04:01:01 -- setup/common.sh@33 -- # return 0 00:04:59.427 04:01:01 -- setup/hugepages.sh@97 -- # anon=0 00:04:59.427 04:01:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.427 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.427 04:01:01 -- setup/common.sh@18 -- # local node= 00:04:59.427 04:01:01 -- setup/common.sh@19 -- # local var val 00:04:59.427 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.427 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.427 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.427 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.427 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.427 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6489088 kB' 'MemAvailable: 9415476 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497632 kB' 'Inactive: 2750328 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190876 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102756 kB' 'KernelStack: 6832 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- 
setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 
-- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.427 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.427 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- 
setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.428 04:01:01 -- setup/common.sh@33 -- # echo 0 00:04:59.428 04:01:01 -- setup/common.sh@33 -- # return 0 00:04:59.428 04:01:01 -- setup/hugepages.sh@99 -- # surp=0 
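The long runs of "continue" above are setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the requested one (HugePages_Surp here, which hugepages.sh then stores as surp=0). A minimal sketch of that helper, reconstructed only from the commands visible in this xtrace — the function signature, the redirections, and the exact branch structure are assumptions; the script shipped in the SPDK repo is the authoritative version:

    shopt -s extglob                         # the +([0-9]) pattern below needs extglob
    get_meminfo() {
        local get=$1                         # meminfo key to report, e.g. HugePages_Surp
        local node=$2                        # optional NUMA node number
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # per-node counters live under /sys/devices/system/node/nodeN/meminfo
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1                         # a node was requested but does not exist
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == $get ]] || continue   # the "continue" lines traced above
            echo "$val"                      # kB value, or a bare count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With a helper of this shape, the step above amounts to surp=$(get_meminfo HugePages_Surp), which echoes 0 on this machine.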
00:04:59.428 04:01:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.428 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.428 04:01:01 -- setup/common.sh@18 -- # local node= 00:04:59.428 04:01:01 -- setup/common.sh@19 -- # local var val 00:04:59.428 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.428 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.428 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.428 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.428 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.428 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6489528 kB' 'MemAvailable: 9415916 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497640 kB' 'Inactive: 2750328 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190876 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102756 kB' 'KernelStack: 6832 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 
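For orientation, the meminfo snapshot printed just above is internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts for Hugetlb: 2097152 kB (1024 × 2048 kB), and with HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0 it satisfies the (( 1024 == nr_hugepages + surp + resv )) check that hugepages.sh performs a few steps further down.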
00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.428 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.428 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 
-- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.429 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.429 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.429 04:01:01 -- setup/common.sh@33 -- # echo 0 00:04:59.429 04:01:01 -- setup/common.sh@33 -- # return 0 00:04:59.429 04:01:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:59.429 nr_hugepages=1024 00:04:59.429 04:01:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.429 resv_hugepages=0 00:04:59.429 04:01:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.429 surplus_hugepages=0 00:04:59.429 04:01:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.429 anon_hugepages=0 00:04:59.429 04:01:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.429 04:01:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.429 04:01:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.429 04:01:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.429 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.429 04:01:01 -- setup/common.sh@18 -- # local node= 00:04:59.429 04:01:01 -- setup/common.sh@19 -- # local var val 00:04:59.429 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.429 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.429 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.429 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.430 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.430 04:01:01 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6489528 kB' 'MemAvailable: 9415916 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497856 kB' 'Inactive: 2750328 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190876 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102756 kB' 'KernelStack: 6816 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 
04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.430 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.430 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 
04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.705 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.705 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.706 04:01:01 -- setup/common.sh@33 -- # echo 1024 00:04:59.706 04:01:01 -- setup/common.sh@33 -- # return 0 00:04:59.706 04:01:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.706 04:01:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.706 04:01:01 -- setup/hugepages.sh@27 -- # local node 00:04:59.706 04:01:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.706 04:01:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.706 04:01:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:59.706 04:01:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.706 04:01:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.706 04:01:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.706 04:01:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.706 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.706 04:01:01 -- setup/common.sh@18 -- # local node=0 00:04:59.706 04:01:01 -- setup/common.sh@19 -- # local var val 00:04:59.706 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.706 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.706 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.706 04:01:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.706 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.706 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6489276 kB' 'MemUsed: 5749832 kB' 'SwapCached: 0 kB' 'Active: 497632 kB' 'Inactive: 2750328 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3129968 kB' 'Mapped: 50900 kB' 'AnonPages: 119624 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88120 kB' 'Slab: 190872 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Surp: 0' 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 
04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.706 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.706 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.707 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.707 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.707 04:01:01 -- setup/common.sh@33 -- # echo 0 00:04:59.707 04:01:01 -- setup/common.sh@33 -- # return 0 00:04:59.707 04:01:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.707 04:01:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.707 04:01:01 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.707 node0=1024 expecting 1024 00:04:59.707 04:01:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.707 04:01:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.707 00:04:59.707 real 0m0.577s 00:04:59.707 user 0m0.306s 00:04:59.707 sys 0m0.308s 00:04:59.707 04:01:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.707 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:04:59.707 ************************************ 00:04:59.707 END TEST even_2G_alloc 00:04:59.707 ************************************ 00:04:59.707 04:01:01 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:59.707 04:01:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.707 04:01:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.707 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:04:59.707 ************************************ 00:04:59.707 START TEST odd_alloc 00:04:59.707 ************************************ 00:04:59.707 04:01:01 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:59.707 04:01:01 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:59.707 04:01:01 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:59.707 04:01:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:59.707 04:01:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.707 04:01:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.707 04:01:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.707 04:01:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:59.707 04:01:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:59.707 04:01:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.707 04:01:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.707 04:01:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:59.707 04:01:01 -- setup/hugepages.sh@83 -- # : 0 00:04:59.707 04:01:01 -- setup/hugepages.sh@84 -- # : 0 00:04:59.707 04:01:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.707 04:01:01 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:59.707 04:01:01 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:59.707 04:01:01 -- setup/hugepages.sh@160 -- # setup output 00:04:59.707 04:01:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.707 04:01:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.972 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.972 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:59.972 04:01:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:59.972 04:01:01 -- setup/hugepages.sh@89 -- # local node 00:04:59.972 04:01:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.972 04:01:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.972 04:01:01 -- setup/hugepages.sh@92 -- # local surp 00:04:59.972 04:01:01 -- setup/hugepages.sh@93 -- # local resv 00:04:59.972 04:01:01 -- 
setup/hugepages.sh@94 -- # local anon 00:04:59.972 04:01:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.972 04:01:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.972 04:01:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.972 04:01:01 -- setup/common.sh@18 -- # local node= 00:04:59.972 04:01:01 -- setup/common.sh@19 -- # local var val 00:04:59.972 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.972 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.972 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.972 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.972 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.972 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6493952 kB' 'MemAvailable: 9420340 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497884 kB' 'Inactive: 2750328 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190832 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102712 kB' 'KernelStack: 6792 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 
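The odd_alloc test started above requests 2098176 kB of hugepages (HUGEMEM=2049, i.e. 2049 MiB); with the 2048 kB default hugepage size on this VM that rounds up to the odd page count of 1025 reported in nr_hugepages=1025. A quick way to check the arithmetic, assuming ceiling division (the exact rounding inside hugepages.sh is not shown in this excerpt):

    size_kb=$((2049 * 1024))                               # HUGEMEM=2049 MiB -> 2098176 kB
    hugepage_kb=2048                                       # default hugepage size on this VM
    echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # ceiling division -> 1025
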
00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.972 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.972 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.973 04:01:01 -- setup/common.sh@32 -- # continue 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.973 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 
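The long runs of [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue records above are bash xtrace output from get_meminfo scanning /proc/meminfo one field at a time with IFS=': ' and read, discarding every key until the requested one matches. A minimal stand-alone sketch of that pattern (get_meminfo_value is a hypothetical, simplified helper; the real setup/common.sh version also handles per-node meminfo files by stripping their "Node N " prefix):

    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key matches, then print its value.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_value AnonHugePages    # e.g. 0
    get_meminfo_value HugePages_Free   # e.g. 1025
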
00:05:00.245 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.245 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.245 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.246 04:01:01 -- setup/common.sh@33 -- # echo 0 00:05:00.246 04:01:01 -- setup/common.sh@33 -- # return 0 00:05:00.246 04:01:01 -- setup/hugepages.sh@97 -- # anon=0 00:05:00.246 04:01:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.246 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.246 04:01:01 -- setup/common.sh@18 -- # local node= 00:05:00.246 04:01:01 -- setup/common.sh@19 -- # local var val 00:05:00.246 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.246 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.246 04:01:01 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:00.246 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.246 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.246 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6493952 kB' 'MemAvailable: 9420340 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497444 kB' 'Inactive: 2750328 kB' 'Active(anon): 128292 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119352 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190852 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102732 kB' 'KernelStack: 6832 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.246 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.246 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- 
setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.247 04:01:01 -- setup/common.sh@33 -- # echo 0 00:05:00.247 04:01:01 -- setup/common.sh@33 -- # return 0 00:05:00.247 04:01:01 -- setup/hugepages.sh@99 -- # surp=0 00:05:00.247 04:01:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.247 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.247 04:01:01 -- setup/common.sh@18 -- # local node= 00:05:00.247 04:01:01 -- setup/common.sh@19 -- # local var val 00:05:00.247 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.247 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.247 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.247 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.247 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.247 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6493952 kB' 'MemAvailable: 9420340 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497600 kB' 'Inactive: 2750328 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 
190844 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102724 kB' 'KernelStack: 6816 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.247 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.247 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 
-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 
04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 
-- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.248 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.248 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 
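For reference while reading these scans: HugePages_Rsvd counts huge pages already promised to mappings but not yet faulted in, and HugePages_Surp counts pages allocated above nr_hugepages through overcommit; the verification expects both to stay at 0 here. The same counters can be pulled directly with:

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
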
00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.249 04:01:01 -- setup/common.sh@33 -- # echo 0 00:05:00.249 04:01:01 -- setup/common.sh@33 -- # return 0 00:05:00.249 04:01:01 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.249 nr_hugepages=1025 00:05:00.249 04:01:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:00.249 resv_hugepages=0 00:05:00.249 04:01:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.249 surplus_hugepages=0 00:05:00.249 04:01:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.249 anon_hugepages=0 00:05:00.249 04:01:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.249 04:01:01 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:00.249 04:01:01 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:00.249 04:01:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.249 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.249 04:01:01 -- setup/common.sh@18 -- # local node= 00:05:00.249 04:01:01 -- setup/common.sh@19 -- # local var val 00:05:00.249 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.249 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.249 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.249 04:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.249 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.249 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6493952 kB' 'MemAvailable: 9420340 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497576 kB' 'Inactive: 2750328 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119580 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190840 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102720 kB' 'KernelStack: 6832 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 
'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
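With anon, surplus and reserved all read back as 0, the assertion being traced reduces to: the system-wide HugePages_Total must equal the 1025 pages the odd_alloc test asked for. A condensed sketch of that check, reusing the hypothetical get_meminfo_value helper sketched earlier (variable names simplified from setup/hugepages.sh):

    nr_hugepages=1025
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total: $total"
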
00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.249 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.249 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 
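The trace that follows repeats the same scan per NUMA node: get_nodes enumerates /sys/devices/system/node/node*, one expected count (1025) is recorded for the single node of this VM, and get_meminfo is then pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A rough stand-alone sketch of that per-node pass (simplified; not the exact setup/hugepages.sh loop):

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0", value in field 4.
        surp=$(awk '/HugePages_Surp/ {print $4}' "$node_dir/meminfo")
        echo "node$node surplus hugepages: ${surp:-0}"
    done
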
00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 
-- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.250 04:01:01 -- setup/common.sh@33 -- # echo 1025 00:05:00.250 04:01:01 -- setup/common.sh@33 -- # return 0 00:05:00.250 04:01:01 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:00.250 04:01:01 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.250 04:01:01 -- setup/hugepages.sh@27 -- # local node 00:05:00.250 04:01:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.250 04:01:01 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:00.250 04:01:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.250 04:01:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.250 04:01:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.250 04:01:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.250 04:01:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.250 04:01:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.250 04:01:01 -- setup/common.sh@18 -- # local node=0 00:05:00.250 04:01:01 -- setup/common.sh@19 -- # local var val 00:05:00.250 04:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.250 04:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.250 04:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.250 04:01:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.250 04:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.250 04:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6493952 kB' 'MemUsed: 5745156 kB' 'SwapCached: 0 kB' 'Active: 497536 kB' 'Inactive: 2750328 kB' 'Active(anon): 128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3129968 kB' 'Mapped: 50900 kB' 'AnonPages: 119540 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88120 kB' 'Slab: 190840 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.250 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.250 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- 
setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # continue 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.251 04:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.251 04:01:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.251 04:01:01 -- setup/common.sh@33 -- # echo 0 00:05:00.251 04:01:01 -- setup/common.sh@33 -- # return 0 00:05:00.251 04:01:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.251 04:01:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.251 04:01:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.251 04:01:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.251 node0=1025 expecting 1025 00:05:00.251 04:01:01 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:00.251 04:01:01 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:00.251 00:05:00.251 real 0m0.569s 00:05:00.251 user 0m0.291s 00:05:00.251 sys 0m0.315s 00:05:00.251 04:01:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.251 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 ************************************ 00:05:00.251 END TEST odd_alloc 00:05:00.251 ************************************ 00:05:00.251 04:01:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:00.251 04:01:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.251 04:01:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.251 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 ************************************ 00:05:00.251 START TEST custom_alloc 00:05:00.251 ************************************ 00:05:00.251 04:01:01 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:00.251 04:01:01 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:00.251 04:01:01 -- setup/hugepages.sh@169 -- # local node 00:05:00.251 04:01:01 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:00.252 04:01:01 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:00.252 04:01:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 
_nr_hugepages=0 00:05:00.252 04:01:01 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:00.252 04:01:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:00.252 04:01:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:00.252 04:01:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:00.252 04:01:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:00.252 04:01:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.252 04:01:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:00.252 04:01:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.252 04:01:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.252 04:01:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.252 04:01:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:00.252 04:01:01 -- setup/hugepages.sh@83 -- # : 0 00:05:00.252 04:01:01 -- setup/hugepages.sh@84 -- # : 0 00:05:00.252 04:01:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:00.252 04:01:01 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:00.252 04:01:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:00.252 04:01:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:00.252 04:01:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:00.252 04:01:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.252 04:01:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:00.252 04:01:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.252 04:01:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.252 04:01:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.252 04:01:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:00.252 04:01:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:00.252 04:01:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:00.252 04:01:01 -- setup/hugepages.sh@78 -- # return 0 00:05:00.252 04:01:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:00.252 04:01:01 -- setup/hugepages.sh@187 -- # setup output 00:05:00.252 04:01:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.252 04:01:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.825 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.825 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.825 04:01:02 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:00.825 04:01:02 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:00.825 04:01:02 -- setup/hugepages.sh@89 -- # local node 00:05:00.825 04:01:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.825 04:01:02 -- setup/hugepages.sh@91 -- # local 
sorted_s 00:05:00.825 04:01:02 -- setup/hugepages.sh@92 -- # local surp 00:05:00.825 04:01:02 -- setup/hugepages.sh@93 -- # local resv 00:05:00.825 04:01:02 -- setup/hugepages.sh@94 -- # local anon 00:05:00.825 04:01:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.825 04:01:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.825 04:01:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.825 04:01:02 -- setup/common.sh@18 -- # local node= 00:05:00.825 04:01:02 -- setup/common.sh@19 -- # local var val 00:05:00.825 04:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.825 04:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.825 04:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.825 04:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.825 04:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.825 04:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7542228 kB' 'MemAvailable: 10468616 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 498448 kB' 'Inactive: 2750328 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120120 kB' 'Mapped: 51012 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190776 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102656 kB' 'KernelStack: 6792 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.825 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.825 04:01:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # 
continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.826 04:01:02 -- setup/common.sh@33 -- # echo 0 00:05:00.826 04:01:02 -- setup/common.sh@33 -- # return 0 00:05:00.826 04:01:02 -- setup/hugepages.sh@97 -- # anon=0 00:05:00.826 04:01:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.826 04:01:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.826 04:01:02 -- setup/common.sh@18 -- # local node= 00:05:00.826 04:01:02 -- setup/common.sh@19 -- # local var val 00:05:00.826 04:01:02 -- setup/common.sh@20 -- # local mem_f mem 
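The get_meminfo trace above and below boils down to a simple lookup: read /proc/meminfo, or the per-node copy under /sys/devices/system/node/nodeN/meminfo when a node number is given, strip the "Node N " prefix those per-node files carry, then scan "field: value" pairs until the requested field matches and echo its value. A minimal standalone sketch of that lookup, using a hypothetical lookup_meminfo helper (illustrative only, not the SPDK script itself, though the prefix strip and IFS=': ' split mirror what the trace shows), could look like:

    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    lookup_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it so both files parse alike
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

For example, lookup_meminfo HugePages_Surp 0 would print 0 on this box, matching the node0 dump earlier in the log.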
00:05:00.826 04:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.826 04:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.826 04:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.826 04:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.826 04:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7548060 kB' 'MemAvailable: 10474448 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497700 kB' 'Inactive: 2750328 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119648 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190784 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102664 kB' 'KernelStack: 6832 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.826 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.826 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # 
continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.827 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.827 04:01:02 -- setup/common.sh@33 -- # echo 0 00:05:00.827 04:01:02 -- setup/common.sh@33 -- # return 0 00:05:00.827 04:01:02 -- setup/hugepages.sh@99 -- # surp=0 00:05:00.827 04:01:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.827 04:01:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.827 04:01:02 -- setup/common.sh@18 -- # local node= 00:05:00.827 04:01:02 -- setup/common.sh@19 -- # local var val 00:05:00.827 04:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.827 04:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.827 04:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.827 04:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.827 04:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.827 04:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.827 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7550184 kB' 'MemAvailable: 10476572 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 498148 kB' 'Inactive: 2750328 kB' 'Active(anon): 128996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'AnonPages: 120176 kB' 'Mapped: 51160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190784 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102664 kB' 'KernelStack: 6880 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 325848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
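For the numbers in this custom_alloc run: get_test_nr_hugepages was asked for 1048576 kB, and with the 2048 kB default hugepage size that works out to 1048576 / 2048 = 512 pages; with a single NUMA node they all land on node 0, hence HUGENODE='nodes_hp[0]=512' and the 'HugePages_Total: 512' seen in the dumps. The verify step then cross-checks the kernel's counters the same way the odd_alloc check at hugepages.sh@110 did earlier ((( total == nr_hugepages + surp + resv ))). A rough sketch of that accounting, reusing the hypothetical lookup_meminfo helper from the earlier sketch (assumed names, not the SPDK code), might be:

    size_kb=1048576                              # pool size requested by get_test_nr_hugepages
    hugepage_kb=2048                             # Hugepagesize reported in the meminfo dumps
    nr_hugepages=$(( size_kb / hugepage_kb ))    # 1048576 / 2048 = 512
    surp=$(lookup_meminfo HugePages_Surp)        # 0 in this run
    resv=$(lookup_meminfo HugePages_Rsvd)        # 0 in this run
    total=$(lookup_meminfo HugePages_Total)      # 512 in this run
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool matches: $total"

Since HugePages_Surp and HugePages_Rsvd are both 0 here, the expected total is simply 512, which is what the dumps report.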
00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.828 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.828 04:01:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 
00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.829 04:01:02 -- setup/common.sh@33 -- # echo 0 00:05:00.829 04:01:02 -- setup/common.sh@33 -- # return 0 00:05:00.829 04:01:02 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.829 nr_hugepages=512 00:05:00.829 04:01:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:00.829 resv_hugepages=0 00:05:00.829 04:01:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.829 surplus_hugepages=0 00:05:00.829 anon_hugepages=0 00:05:00.829 04:01:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.829 04:01:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.829 04:01:02 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:00.829 04:01:02 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:00.829 04:01:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.829 04:01:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.829 04:01:02 -- setup/common.sh@18 -- # local node= 00:05:00.829 04:01:02 -- setup/common.sh@19 -- # local var val 00:05:00.829 04:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.829 04:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.829 04:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.829 04:01:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.829 04:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.829 04:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7550508 kB' 'MemAvailable: 10476896 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 497464 kB' 'Inactive: 2750328 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190772 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6816 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.829 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.829 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 
04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.830 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.830 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.830 04:01:02 -- setup/common.sh@33 -- # echo 512 00:05:00.830 04:01:02 -- setup/common.sh@33 -- # return 0 00:05:00.830 04:01:02 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:00.830 04:01:02 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.830 04:01:02 -- setup/hugepages.sh@27 -- # local node 
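At this point get_meminfo has walked the whole dump, hit HugePages_Total, echoed 512 and returned 0, which is what the (( 512 == nr_hugepages + surp + resv )) check consumes before get_nodes takes over below. The traced commands (mem_f=/proc/meminfo, the per-node file fallback, mapfile -t mem, the Node-prefix strip, and the IFS=': ' read loop) let the helper be reconstructed roughly as the sketch below; this is a reconstruction from the trace, not a verbatim copy of setup/common.sh:

  # Sketch of the get_meminfo helper exercised above (reconstructed from the trace).
  get_meminfo() {
      local get=$1 node=$2
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # use the per-node file when a node id is given and the file exists
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes (extglob pattern, as in the trace)
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # xtrace renders this as the escaped pattern seen above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

With the dump printed above, get_meminfo HugePages_Rsvd yields 0 and get_meminfo HugePages_Total yields 512, matching the echo 0 and echo 512 / return 0 lines in the trace.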
00:05:00.830 04:01:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.830 04:01:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.830 04:01:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.830 04:01:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.830 04:01:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.830 04:01:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.830 04:01:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.830 04:01:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.831 04:01:02 -- setup/common.sh@18 -- # local node=0 00:05:00.831 04:01:02 -- setup/common.sh@19 -- # local var val 00:05:00.831 04:01:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.831 04:01:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.831 04:01:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.831 04:01:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.831 04:01:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.831 04:01:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7551208 kB' 'MemUsed: 4687900 kB' 'SwapCached: 0 kB' 'Active: 497828 kB' 'Inactive: 2750328 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3129968 kB' 'Mapped: 50900 kB' 'AnonPages: 119804 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88120 kB' 'Slab: 190760 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': 
' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- 
setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 
04:01:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.831 04:01:02 -- setup/common.sh@32 -- # continue 00:05:00.831 04:01:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.832 04:01:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.832 04:01:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.832 04:01:02 -- setup/common.sh@33 -- # echo 0 00:05:00.832 04:01:02 -- setup/common.sh@33 -- # return 0 00:05:00.832 04:01:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.832 04:01:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.832 node0=512 expecting 512 00:05:00.832 ************************************ 00:05:00.832 END TEST custom_alloc 00:05:00.832 ************************************ 00:05:00.832 04:01:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.832 04:01:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.832 04:01:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.832 04:01:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:00.832 00:05:00.832 real 0m0.626s 00:05:00.832 user 0m0.300s 00:05:00.832 sys 0m0.346s 00:05:00.832 04:01:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.832 04:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:00.832 04:01:02 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:00.832 04:01:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.832 04:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.832 04:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.090 ************************************ 00:05:01.090 START TEST no_shrink_alloc 00:05:01.090 ************************************ 00:05:01.090 04:01:02 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:01.090 04:01:02 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:01.090 04:01:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:01.090 04:01:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 
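The custom_alloc test has just finished (node0=512 expecting 512, real 0m0.626s) and no_shrink_alloc starts by requesting 2097152 kB of hugepages for node 0. With the 2048 kB page size reported above, that works out to 1024 pages, which is exactly the nr_hugepages=1024 and nodes_test[0]=1024 the trace below assigns. A sketch of that arithmetic, with names modelled on the traced hugepages.sh functions (the units and the helper split are approximations; the trace only shows size=2097152 going in and 1024 coming out):

  # Sketch of the size -> hugepage-count setup traced below (approximation, not verbatim hugepages.sh).
  nodes_test=()

  get_test_nr_hugepages() {
      local size=$1; shift                 # requested size, 2097152 in this run
      local node_ids=("$@")                # optional node list, ('0') here
      local default_hugepages=2048         # kB, matches "Hugepagesize: 2048 kB" above
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
      local node
      for node in "${node_ids[@]}"; do
          nodes_test[node]=$nr_hugepages
      done
  }

  get_test_nr_hugepages 2097152 0          # leaves nr_hugepages=1024, nodes_test[0]=1024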
00:05:01.090 04:01:02 -- setup/hugepages.sh@51 -- # shift 00:05:01.090 04:01:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:01.090 04:01:02 -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.090 04:01:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.090 04:01:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.090 04:01:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:01.090 04:01:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:01.090 04:01:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.090 04:01:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:01.090 04:01:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.090 04:01:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.090 04:01:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.090 04:01:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:01.090 04:01:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.090 04:01:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:01.090 04:01:02 -- setup/hugepages.sh@73 -- # return 0 00:05:01.090 04:01:02 -- setup/hugepages.sh@198 -- # setup output 00:05:01.090 04:01:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.090 04:01:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.351 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.351 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.351 04:01:03 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:01.351 04:01:03 -- setup/hugepages.sh@89 -- # local node 00:05:01.351 04:01:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.351 04:01:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.351 04:01:03 -- setup/hugepages.sh@92 -- # local surp 00:05:01.351 04:01:03 -- setup/hugepages.sh@93 -- # local resv 00:05:01.351 04:01:03 -- setup/hugepages.sh@94 -- # local anon 00:05:01.351 04:01:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.351 04:01:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.351 04:01:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.351 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:01.351 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.351 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.351 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.351 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.351 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.351 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.351 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6498628 kB' 'MemAvailable: 9425016 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 498100 kB' 'Inactive: 2750328 kB' 'Active(anon): 128948 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120028 kB' 'Mapped: 50980 kB' 'Shmem: 10488 kB' 
'KReclaimable: 88120 kB' 'Slab: 190764 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102644 kB' 'KernelStack: 6824 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.351 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.351 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 
00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.352 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:01.352 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:01.352 04:01:03 -- setup/hugepages.sh@97 -- # anon=0 00:05:01.352 04:01:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.352 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.352 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:01.352 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.352 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.352 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.352 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.352 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.352 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.352 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6498628 kB' 'MemAvailable: 9425016 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 498028 kB' 'Inactive: 2750328 kB' 'Active(anon): 128876 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119964 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88120 kB' 'Slab: 190772 kB' 'SReclaimable: 88120 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6832 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.352 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.352 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 
-- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.353 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.353 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 
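The xtrace output above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: split each line on ': ', compare the field name against the requested key (HugePages_Surp, then HugePages_Rsvd here), and echo the value once it matches. Below is a minimal stand-alone sketch of that pattern; the function name get_meminfo_sketch, its structure, and the single-digit "Node N " prefix handling are illustrative assumptions, not the actual SPDK helper.

#!/usr/bin/env bash
# Hedged sketch, not the SPDK setup/common.sh helper: look up one field in a
# meminfo-style file the way the trace above does (split on ': ', match the
# field name, print the value).
get_meminfo_sketch() {
    local get=$1                      # field name, e.g. HugePages_Surp
    local mem_f=${2:-/proc/meminfo}   # or /sys/devices/system/node/nodeN/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node [0-9]* }     # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"               # any trailing kB unit lands in the discarded third field
            return 0
        fi
    done < "$mem_f"
    return 1                          # field not present
}

# In the run traced above this resolves HugePages_Surp and HugePages_Rsvd to 0
# and HugePages_Total to 1024.
get_meminfo_sketch HugePages_Total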
00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.354 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:01.354 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:01.354 04:01:03 -- setup/hugepages.sh@99 -- # surp=0 00:05:01.354 04:01:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.354 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.354 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:01.354 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.354 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.354 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.354 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.354 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.354 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.354 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6500820 kB' 'MemAvailable: 9427200 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 495104 kB' 'Inactive: 2750328 kB' 'Active(anon): 125952 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117012 kB' 'Mapped: 50052 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 190540 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102440 kB' 'KernelStack: 6704 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': 
' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 
-- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.354 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.354 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.355 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.355 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.355 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.355 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.355 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.355 04:01:03 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.355 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.615 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.615 04:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.616 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:01.616 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:01.616 04:01:03 -- setup/hugepages.sh@100 -- # resv=0 00:05:01.616 nr_hugepages=1024 00:05:01.616 04:01:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.616 resv_hugepages=0 00:05:01.616 04:01:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.616 surplus_hugepages=0 00:05:01.616 04:01:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.616 anon_hugepages=0 00:05:01.616 04:01:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.616 04:01:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.616 04:01:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.616 04:01:03 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:01.616 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.616 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:01.616 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.616 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.616 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.616 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.616 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.616 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.616 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6500820 kB' 'MemAvailable: 9427200 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 495304 kB' 'Inactive: 2750328 kB' 'Active(anon): 126152 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117248 kB' 'Mapped: 50052 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 190540 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102440 kB' 'KernelStack: 6704 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.616 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.616 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.617 04:01:03 -- setup/common.sh@33 -- # echo 1024 00:05:01.617 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:01.617 04:01:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.617 04:01:03 -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.617 04:01:03 -- setup/hugepages.sh@27 -- # local node 00:05:01.617 04:01:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.617 04:01:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.617 04:01:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.617 04:01:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.617 04:01:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.617 04:01:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.617 04:01:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.617 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.617 04:01:03 -- setup/common.sh@18 -- # local node=0 00:05:01.617 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.617 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.617 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.617 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.617 04:01:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.617 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.617 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6500820 kB' 'MemUsed: 5738288 kB' 'SwapCached: 0 kB' 'Active: 495128 kB' 'Inactive: 2750328 kB' 'Active(anon): 125976 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3129968 kB' 'Mapped: 50052 kB' 'AnonPages: 117076 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88100 kB' 'Slab: 190540 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.617 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.617 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- 
setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.618 04:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.618 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.618 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.618 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:01.618 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:01.618 04:01:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.618 04:01:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.618 04:01:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.618 04:01:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.618 node0=1024 expecting 1024 00:05:01.618 04:01:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.618 04:01:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.618 04:01:03 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:01.618 04:01:03 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:01.618 04:01:03 -- setup/hugepages.sh@202 -- # setup output 00:05:01.618 04:01:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.618 04:01:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.878 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.878 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.878 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:01.878 04:01:03 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:01.878 04:01:03 -- setup/hugepages.sh@89 -- # local node 00:05:01.878 04:01:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.878 04:01:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.878 04:01:03 -- setup/hugepages.sh@92 -- # local surp 00:05:01.878 04:01:03 -- setup/hugepages.sh@93 -- # local resv 00:05:01.878 04:01:03 -- setup/hugepages.sh@94 -- # local anon 00:05:01.878 04:01:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.878 04:01:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.878 04:01:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.878 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:01.878 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.878 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.878 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.878 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.878 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.878 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.878 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6503328 kB' 'MemAvailable: 9429708 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 495548 kB' 'Inactive: 2750328 kB' 'Active(anon): 126396 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117824 kB' 'Mapped: 50172 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 
190408 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102308 kB' 'KernelStack: 6760 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.878 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.878 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.879 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:01.879 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:01.879 04:01:03 -- setup/hugepages.sh@97 -- # anon=0 00:05:01.879 04:01:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.879 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.879 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:01.879 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:01.879 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.879 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.879 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.879 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.879 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.879 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6503880 kB' 'MemAvailable: 9430260 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 495148 kB' 'Inactive: 2750328 kB' 'Active(anon): 125996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116864 kB' 'Mapped: 50172 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 190380 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102280 kB' 'KernelStack: 6680 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.879 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.879 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.880 04:01:03 -- setup/common.sh@32 -- # continue 00:05:01.880 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 
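The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' in this trace are xtrace output from get_meminfo walking every line of /proc/meminfo until it reaches the requested key. A minimal sketch of that scan, using a hypothetical helper name get_meminfo_sketch rather than the real setup/common.sh implementation:

  get_meminfo_sketch() {
      # $1 = key to look up (e.g. HugePages_Surp), $2 = optional meminfo path
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # IFS of ': ' splits "HugePages_Surp:     0" into var=HugePages_Surp, val=0
          if [[ $var == "$get" ]]; then
              echo "$val"      # mirrors the 'echo 0' / 'echo 1024' seen at setup/common.sh@33
              return 0
          fi
      done < "$mem_f"
      echo 0                   # default to 0 if the key is missing entirely
  }

On this host, get_meminfo_sketch HugePages_Surp prints 0 and get_meminfo_sketch HugePages_Total prints 1024, matching the values returned in the trace.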
00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 
04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.141 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.141 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.142 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:02.142 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:02.142 04:01:03 -- setup/hugepages.sh@99 -- # surp=0 00:05:02.142 04:01:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.142 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.142 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:02.142 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:02.142 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.142 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.142 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.142 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.142 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.142 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6503880 kB' 'MemAvailable: 9430260 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 495120 kB' 'Inactive: 2750328 kB' 'Active(anon): 125968 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117096 kB' 'Mapped: 50172 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 190380 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102280 kB' 'KernelStack: 6680 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 
04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.142 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.142 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # 
continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.143 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:02.143 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:02.143 04:01:03 -- setup/hugepages.sh@100 -- # resv=0 00:05:02.143 nr_hugepages=1024 00:05:02.143 04:01:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.143 resv_hugepages=0 00:05:02.143 04:01:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.143 surplus_hugepages=0 00:05:02.143 04:01:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.143 anon_hugepages=0 00:05:02.143 04:01:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.143 04:01:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.143 04:01:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.143 04:01:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.143 04:01:03 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:02.143 04:01:03 -- setup/common.sh@18 -- # local node= 00:05:02.143 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:02.143 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.143 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.143 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.143 04:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.143 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.143 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6503880 kB' 'MemAvailable: 9430260 kB' 'Buffers: 2684 kB' 'Cached: 3127284 kB' 'SwapCached: 0 kB' 'Active: 495112 kB' 'Inactive: 2750328 kB' 'Active(anon): 125960 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117096 kB' 'Mapped: 50052 kB' 'Shmem: 10488 kB' 'KReclaimable: 88100 kB' 'Slab: 190384 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102284 kB' 'KernelStack: 6704 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.143 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.143 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- 
setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 
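Between these scans, the checks traced at setup/hugepages.sh@107 and @110 reduce to one identity: the pool is consistent when HugePages_Total equals the requested nr_hugepages plus surplus plus reserved pages (1024 == 1024 + 0 + 0 in this run). A hedged sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch helper from the note above:

  nr_hugepages=1024                              # value the test expects
  surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
  if (( total == nr_hugepages + surp + resv )); then
      # corresponds to the echoes interleaved in the trace
      echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp"
  fi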
00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 
04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.144 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.144 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.145 04:01:03 -- setup/common.sh@33 -- # echo 1024 00:05:02.145 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:02.145 04:01:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.145 04:01:03 -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.145 04:01:03 -- setup/hugepages.sh@27 -- # local node 00:05:02.145 04:01:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.145 04:01:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.145 04:01:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.145 04:01:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.145 04:01:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.145 04:01:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.145 04:01:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.145 04:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.145 04:01:03 -- setup/common.sh@18 -- # local node=0 00:05:02.145 04:01:03 -- setup/common.sh@19 -- # local var val 00:05:02.145 04:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.145 04:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.145 04:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.145 04:01:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.145 04:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.145 04:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6504268 kB' 'MemUsed: 5734840 kB' 'SwapCached: 0 kB' 'Active: 495168 kB' 'Inactive: 2750328 kB' 'Active(anon): 126016 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2750328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 3129968 kB' 'Mapped: 50052 kB' 'AnonPages: 117128 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88100 kB' 'Slab: 190384 kB' 'SReclaimable: 88100 kB' 'SUnreclaim: 102284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 
04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- 
# continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.145 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.145 04:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@32 -- # continue 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.146 04:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.146 04:01:03 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.146 04:01:03 -- setup/common.sh@33 -- # echo 0 00:05:02.146 04:01:03 -- setup/common.sh@33 -- # return 0 00:05:02.146 04:01:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.146 04:01:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.146 04:01:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.146 04:01:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.146 node0=1024 expecting 1024 00:05:02.146 04:01:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.146 04:01:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.146 00:05:02.146 real 0m1.169s 00:05:02.146 user 0m0.565s 00:05:02.146 sys 0m0.667s 00:05:02.146 04:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.146 04:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:02.146 ************************************ 00:05:02.146 END TEST no_shrink_alloc 00:05:02.146 ************************************ 00:05:02.146 04:01:03 -- setup/hugepages.sh@217 -- # clear_hp 00:05:02.146 04:01:03 -- setup/hugepages.sh@37 -- # local node hp 00:05:02.146 04:01:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.146 04:01:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.146 04:01:03 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.146 04:01:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.146 04:01:03 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.146 04:01:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.146 04:01:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.146 00:05:02.146 real 0m5.137s 00:05:02.146 user 0m2.505s 00:05:02.146 sys 0m2.727s 00:05:02.146 04:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.146 04:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:02.146 ************************************ 00:05:02.146 END TEST hugepages 00:05:02.146 ************************************ 00:05:02.146 04:01:03 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:02.146 04:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.146 04:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.146 04:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:02.146 ************************************ 00:05:02.146 START TEST driver 00:05:02.146 ************************************ 00:05:02.146 04:01:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:02.405 * Looking for test storage... 
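[Editor's note] The long runs of "[[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" above are setup/common.sh's get_meminfo helper stepping through /proc/meminfo (or, for the HugePages_Surp lookup, /sys/devices/system/node/node0/meminfo) one "key: value" row at a time until it reaches the requested field, then echoing that value back to hugepages.sh. A minimal standalone sketch of that loop follows; the function name, the IFS=': ' read pattern, and the two field names come from the trace, while the body itself is an illustration rather than the repo's exact code:

#!/usr/bin/env bash
# Scan a meminfo file for one field and print its value, as the traced
# get_meminfo does; per-node files prefix every row with "Node <n> ".
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}            # no-op for /proc/meminfo rows
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                        # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Total      # system-wide count checked by no_shrink_alloc
get_meminfo HugePages_Surp 0     # surplus hugepages on NUMA node 0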
00:05:02.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:02.405 04:01:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.405 04:01:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.405 04:01:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.405 04:01:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.405 04:01:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.405 04:01:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.405 04:01:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.405 04:01:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.405 04:01:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.405 04:01:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.405 04:01:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.405 04:01:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.405 04:01:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.405 04:01:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.405 04:01:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.405 04:01:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.405 04:01:04 -- scripts/common.sh@344 -- # : 1 00:05:02.405 04:01:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.405 04:01:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.405 04:01:04 -- scripts/common.sh@364 -- # decimal 1 00:05:02.405 04:01:04 -- scripts/common.sh@352 -- # local d=1 00:05:02.405 04:01:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.405 04:01:04 -- scripts/common.sh@354 -- # echo 1 00:05:02.405 04:01:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.405 04:01:04 -- scripts/common.sh@365 -- # decimal 2 00:05:02.405 04:01:04 -- scripts/common.sh@352 -- # local d=2 00:05:02.405 04:01:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.405 04:01:04 -- scripts/common.sh@354 -- # echo 2 00:05:02.405 04:01:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.405 04:01:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.405 04:01:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.405 04:01:04 -- scripts/common.sh@367 -- # return 0 00:05:02.405 04:01:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.405 04:01:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.405 --rc genhtml_branch_coverage=1 00:05:02.405 --rc genhtml_function_coverage=1 00:05:02.405 --rc genhtml_legend=1 00:05:02.405 --rc geninfo_all_blocks=1 00:05:02.405 --rc geninfo_unexecuted_blocks=1 00:05:02.405 00:05:02.405 ' 00:05:02.405 04:01:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.405 --rc genhtml_branch_coverage=1 00:05:02.405 --rc genhtml_function_coverage=1 00:05:02.405 --rc genhtml_legend=1 00:05:02.405 --rc geninfo_all_blocks=1 00:05:02.405 --rc geninfo_unexecuted_blocks=1 00:05:02.405 00:05:02.405 ' 00:05:02.405 04:01:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.405 --rc genhtml_branch_coverage=1 00:05:02.405 --rc genhtml_function_coverage=1 00:05:02.405 --rc genhtml_legend=1 00:05:02.405 --rc geninfo_all_blocks=1 00:05:02.405 --rc geninfo_unexecuted_blocks=1 00:05:02.405 00:05:02.405 ' 00:05:02.405 04:01:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.405 --rc genhtml_branch_coverage=1 00:05:02.405 --rc genhtml_function_coverage=1 00:05:02.405 --rc genhtml_legend=1 00:05:02.405 --rc geninfo_all_blocks=1 00:05:02.405 --rc geninfo_unexecuted_blocks=1 00:05:02.405 00:05:02.405 ' 00:05:02.405 04:01:04 -- setup/driver.sh@68 -- # setup reset 00:05:02.405 04:01:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.405 04:01:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.973 04:01:04 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:02.973 04:01:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.973 04:01:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.973 04:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:02.973 ************************************ 00:05:02.973 START TEST guess_driver 00:05:02.973 ************************************ 00:05:02.973 04:01:04 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:02.973 04:01:04 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:02.973 04:01:04 -- setup/driver.sh@47 -- # local fail=0 00:05:02.973 04:01:04 -- setup/driver.sh@49 -- # pick_driver 00:05:02.973 04:01:04 -- setup/driver.sh@36 -- # vfio 00:05:02.973 04:01:04 -- setup/driver.sh@21 -- # local iommu_grups 00:05:02.973 04:01:04 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:02.973 04:01:04 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:02.973 04:01:04 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:02.973 04:01:04 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:02.973 04:01:04 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:02.973 04:01:04 -- setup/driver.sh@32 -- # return 1 00:05:02.973 04:01:04 -- setup/driver.sh@38 -- # uio 00:05:02.973 04:01:04 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:02.973 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:02.973 04:01:04 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:02.973 Looking for driver=uio_pci_generic 00:05:02.973 04:01:04 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:02.973 04:01:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.973 04:01:04 -- setup/driver.sh@45 -- # setup output config 00:05:02.973 04:01:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.973 04:01:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.910 04:01:05 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:03.910 04:01:05 -- setup/driver.sh@58 -- # continue 00:05:03.910 04:01:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.910 04:01:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.910 04:01:05 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:03.910 04:01:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.910 04:01:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.910 04:01:05 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:03.910 04:01:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.910 04:01:05 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:03.910 04:01:05 -- setup/driver.sh@65 -- # setup reset 00:05:03.910 04:01:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.910 04:01:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.477 00:05:04.477 real 0m1.570s 00:05:04.477 user 0m0.577s 00:05:04.477 sys 0m0.981s 00:05:04.477 04:01:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.477 04:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.477 ************************************ 00:05:04.477 END TEST guess_driver 00:05:04.477 ************************************ 00:05:04.736 00:05:04.736 real 0m2.406s 00:05:04.736 user 0m0.913s 00:05:04.736 sys 0m1.551s 00:05:04.736 04:01:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.736 04:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.736 ************************************ 00:05:04.736 END TEST driver 00:05:04.736 ************************************ 00:05:04.736 04:01:06 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:04.736 04:01:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.736 04:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.736 04:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.736 ************************************ 00:05:04.736 START TEST devices 00:05:04.736 ************************************ 00:05:04.736 04:01:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:04.736 * Looking for test storage... 00:05:04.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:04.736 04:01:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.736 04:01:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.736 04:01:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.736 04:01:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.736 04:01:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.736 04:01:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.736 04:01:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.736 04:01:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.736 04:01:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.736 04:01:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.736 04:01:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.736 04:01:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.736 04:01:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.736 04:01:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.736 04:01:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.736 04:01:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.736 04:01:06 -- scripts/common.sh@344 -- # : 1 00:05:04.736 04:01:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.736 04:01:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.736 04:01:06 -- scripts/common.sh@364 -- # decimal 1 00:05:04.736 04:01:06 -- scripts/common.sh@352 -- # local d=1 00:05:04.736 04:01:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.736 04:01:06 -- scripts/common.sh@354 -- # echo 1 00:05:04.736 04:01:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.995 04:01:06 -- scripts/common.sh@365 -- # decimal 2 00:05:04.995 04:01:06 -- scripts/common.sh@352 -- # local d=2 00:05:04.995 04:01:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.995 04:01:06 -- scripts/common.sh@354 -- # echo 2 00:05:04.995 04:01:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.995 04:01:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.995 04:01:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.995 04:01:06 -- scripts/common.sh@367 -- # return 0 00:05:04.995 04:01:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.995 04:01:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.995 --rc genhtml_branch_coverage=1 00:05:04.995 --rc genhtml_function_coverage=1 00:05:04.995 --rc genhtml_legend=1 00:05:04.995 --rc geninfo_all_blocks=1 00:05:04.995 --rc geninfo_unexecuted_blocks=1 00:05:04.995 00:05:04.995 ' 00:05:04.995 04:01:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.995 --rc genhtml_branch_coverage=1 00:05:04.995 --rc genhtml_function_coverage=1 00:05:04.995 --rc genhtml_legend=1 00:05:04.995 --rc geninfo_all_blocks=1 00:05:04.995 --rc geninfo_unexecuted_blocks=1 00:05:04.995 00:05:04.995 ' 00:05:04.995 04:01:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.995 --rc genhtml_branch_coverage=1 00:05:04.995 --rc genhtml_function_coverage=1 00:05:04.995 --rc genhtml_legend=1 00:05:04.995 --rc geninfo_all_blocks=1 00:05:04.995 --rc geninfo_unexecuted_blocks=1 00:05:04.995 00:05:04.995 ' 00:05:04.995 04:01:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.995 --rc genhtml_branch_coverage=1 00:05:04.995 --rc genhtml_function_coverage=1 00:05:04.995 --rc genhtml_legend=1 00:05:04.995 --rc geninfo_all_blocks=1 00:05:04.995 --rc geninfo_unexecuted_blocks=1 00:05:04.995 00:05:04.995 ' 00:05:04.995 04:01:06 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:04.995 04:01:06 -- setup/devices.sh@192 -- # setup reset 00:05:04.995 04:01:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.995 04:01:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.563 04:01:07 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:05.563 04:01:07 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:05.563 04:01:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:05.563 04:01:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:05.563 04:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.563 04:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:05.563 04:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:05.563 04:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.563 04:01:07 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:05.563 04:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.563 04:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:05.563 04:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:05.564 04:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:05.564 04:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.564 04:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.564 04:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:05.564 04:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:05.564 04:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:05.564 04:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.564 04:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.564 04:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:05.564 04:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:05.564 04:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:05.564 04:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.564 04:01:07 -- setup/devices.sh@196 -- # blocks=() 00:05:05.564 04:01:07 -- setup/devices.sh@196 -- # declare -a blocks 00:05:05.564 04:01:07 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:05.564 04:01:07 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:05.564 04:01:07 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:05.564 04:01:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.564 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:05.564 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:05.564 04:01:07 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:05.564 04:01:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:05.564 04:01:07 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:05.564 04:01:07 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:05.564 04:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:05.822 No valid GPT data, bailing 00:05:05.822 04:01:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:05.822 04:01:07 -- scripts/common.sh@393 -- # pt= 00:05:05.822 04:01:07 -- scripts/common.sh@394 -- # return 1 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:05.822 04:01:07 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:05.822 04:01:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:05.822 04:01:07 -- setup/common.sh@80 -- # echo 5368709120 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:05.822 04:01:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:05.822 04:01:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:05.822 04:01:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.822 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:05.822 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:05.822 04:01:07 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:05.822 04:01:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
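[Editor's note] Before building its candidate disk list, the devices test above screens out zoned namespaces: for every /sys/block/nvme* it reads queue/zoned and keeps only devices reporting "none". The zoned_devs array and the queue/zoned convention are visible in the trace; the loop below is a simplified sketch of that filter, not the repo's exact code:

#!/usr/bin/env bash
# Mirror the get_zoned_devs pass from the trace: a device counts as zoned
# when /sys/block/<dev>/queue/zoned holds anything other than "none".
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme ]] || continue                 # glob matched nothing
    dev=${nvme##*/}
    zoned=$(cat "$nvme/queue/zoned" 2>/dev/null || echo none)
    if [[ $zoned != none ]]; then
        zoned_devs[$dev]=$zoned                # excluded from the test disks
    fi
done
echo "found ${#zoned_devs[@]} zoned nvme device(s) to exclude"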
00:05:05.822 04:01:07 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:05.822 04:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:05.822 No valid GPT data, bailing 00:05:05.822 04:01:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:05.822 04:01:07 -- scripts/common.sh@393 -- # pt= 00:05:05.822 04:01:07 -- scripts/common.sh@394 -- # return 1 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:05.822 04:01:07 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:05.822 04:01:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:05.822 04:01:07 -- setup/common.sh@80 -- # echo 4294967296 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:05.822 04:01:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:05.822 04:01:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:05.822 04:01:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.822 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:05.822 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:05.822 04:01:07 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:05.822 04:01:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:05.822 04:01:07 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:05.822 04:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:05.822 No valid GPT data, bailing 00:05:05.822 04:01:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:05.822 04:01:07 -- scripts/common.sh@393 -- # pt= 00:05:05.822 04:01:07 -- scripts/common.sh@394 -- # return 1 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:05.822 04:01:07 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:05.822 04:01:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:05.822 04:01:07 -- setup/common.sh@80 -- # echo 4294967296 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:05.822 04:01:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:05.822 04:01:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:05.822 04:01:07 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.822 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:05.822 04:01:07 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:05.822 04:01:07 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:05.822 04:01:07 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:05.822 04:01:07 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:05.822 04:01:07 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:05.822 04:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:06.081 No valid GPT data, bailing 00:05:06.081 04:01:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:06.081 04:01:07 -- scripts/common.sh@393 -- # pt= 00:05:06.081 04:01:07 -- scripts/common.sh@394 -- # return 1 00:05:06.081 04:01:07 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:06.081 04:01:07 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:06.081 04:01:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:06.081 04:01:07 -- setup/common.sh@80 -- # echo 4294967296 
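[Editor's note] Each namespace then has to pass two gates before it lands in the blocks array: scripts/spdk-gpt.py must find no existing GPT on it (the "No valid GPT data, bailing" lines mean the disk is unclaimed, and the blkid -s PTTYPE probe backs that up), and its capacity must reach min_disk_size, 3221225472 bytes (3 GiB); the echoed 5368709120 and 4294967296 values are the byte sizes of nvme0n1 and the nvme1 namespaces. Below is a rough sketch of the size gate only, deriving bytes from the 512-byte sector count in sysfs; whether setup/common.sh implements sec_size_to_bytes exactly this way is not visible in the trace, so treat the function body as an assumption:

#!/usr/bin/env bash
# Keep only namespaces of at least 3 GiB, as the traced min_disk_size check does.
min_disk_size=3221225472                        # 3 GiB threshold from the trace
sec_size_to_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev/size ]] || return 1
    # /sys/block/<dev>/size counts 512-byte sectors regardless of LBA format.
    echo $(( $(cat "/sys/block/$dev/size") * 512 ))
}
for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    bytes=$(sec_size_to_bytes "$dev") || continue
    if (( bytes >= min_disk_size )); then
        echo "$dev: $bytes bytes, usable as a test disk"
    else
        echo "$dev: $bytes bytes, too small"
    fi
done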
00:05:06.081 04:01:07 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:06.081 04:01:07 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:06.081 04:01:07 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:06.081 04:01:07 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:06.081 04:01:07 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:06.081 04:01:07 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:06.081 04:01:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.081 04:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.081 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.081 ************************************ 00:05:06.081 START TEST nvme_mount 00:05:06.081 ************************************ 00:05:06.081 04:01:07 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:06.081 04:01:07 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:06.081 04:01:07 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:06.081 04:01:07 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.081 04:01:07 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.081 04:01:07 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:06.081 04:01:07 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.081 04:01:07 -- setup/common.sh@40 -- # local part_no=1 00:05:06.081 04:01:07 -- setup/common.sh@41 -- # local size=1073741824 00:05:06.081 04:01:07 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.081 04:01:07 -- setup/common.sh@44 -- # parts=() 00:05:06.081 04:01:07 -- setup/common.sh@44 -- # local parts 00:05:06.081 04:01:07 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.081 04:01:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.081 04:01:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.081 04:01:07 -- setup/common.sh@46 -- # (( part++ )) 00:05:06.081 04:01:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.081 04:01:07 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:06.081 04:01:07 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.081 04:01:07 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:07.016 Creating new GPT entries in memory. 00:05:07.017 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:07.017 other utilities. 00:05:07.017 04:01:08 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:07.017 04:01:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.017 04:01:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.017 04:01:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.017 04:01:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:07.954 Creating new GPT entries in memory. 00:05:07.954 The operation has completed successfully. 
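[Editor's note] The nvme_mount test then claims the chosen disk: setup/common.sh wipes any old partition table with sgdisk --zap-all, creates one small partition spanning sectors 2048 through 264191, waits for the partition uevent via scripts/sync_dev_uevents.sh, and formats it ext4 before mounting. A condensed, destructive sketch of that sequence follows; udevadm settle stands in for the repo's uevent helper (which also flocks the disk around sgdisk), and the device path is a placeholder, so do not point this at a disk you care about:

#!/usr/bin/env bash
# Re-create the partition_drive + mkfs steps seen in the trace (DESTRUCTIVE).
set -euo pipefail
disk=/dev/nvme0n1                              # placeholder test disk
mnt=/tmp/nvme_mount_sketch

sgdisk "$disk" --zap-all                       # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191             # partition 1, same sector range as the log
udevadm settle                                 # stand-in for sync_dev_uevents.sh
mkfs.ext4 -qF "${disk}p1"                      # same quiet/force mkfs style as setup/common.sh@71
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"                       # the test file and verify() step would follow here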
00:05:07.954 04:01:09 -- setup/common.sh@57 -- # (( part++ )) 00:05:07.954 04:01:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.954 04:01:09 -- setup/common.sh@62 -- # wait 65852 00:05:08.213 04:01:09 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.213 04:01:09 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:08.213 04:01:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.213 04:01:09 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:08.213 04:01:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:08.213 04:01:09 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.213 04:01:09 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.213 04:01:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:08.213 04:01:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:08.213 04:01:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.213 04:01:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.213 04:01:09 -- setup/devices.sh@53 -- # local found=0 00:05:08.213 04:01:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.213 04:01:09 -- setup/devices.sh@56 -- # : 00:05:08.213 04:01:09 -- setup/devices.sh@59 -- # local pci status 00:05:08.213 04:01:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.213 04:01:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.213 04:01:09 -- setup/devices.sh@47 -- # setup output config 00:05:08.213 04:01:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.213 04:01:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.472 04:01:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.472 04:01:09 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:08.472 04:01:09 -- setup/devices.sh@63 -- # found=1 00:05:08.472 04:01:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.472 04:01:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.472 04:01:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.731 04:01:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.731 04:01:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.731 04:01:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.731 04:01:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.731 04:01:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.731 04:01:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:08.731 04:01:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.731 04:01:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.731 04:01:10 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.731 04:01:10 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:08.731 04:01:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.731 04:01:10 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.731 04:01:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.731 04:01:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.990 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.990 04:01:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.990 04:01:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.249 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.249 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.249 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.249 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.249 04:01:10 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:09.249 04:01:10 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:09.249 04:01:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.249 04:01:10 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:09.249 04:01:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:09.249 04:01:10 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.249 04:01:10 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.249 04:01:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:09.249 04:01:10 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:09.249 04:01:10 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.249 04:01:10 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.249 04:01:10 -- setup/devices.sh@53 -- # local found=0 00:05:09.249 04:01:10 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.249 04:01:10 -- setup/devices.sh@56 -- # : 00:05:09.249 04:01:10 -- setup/devices.sh@59 -- # local pci status 00:05:09.249 04:01:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.249 04:01:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.249 04:01:10 -- setup/devices.sh@47 -- # setup output config 00:05:09.249 04:01:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.249 04:01:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.249 04:01:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.249 04:01:11 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:09.249 04:01:11 -- setup/devices.sh@63 -- # found=1 00:05:09.249 04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.249 04:01:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.249 
04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.816 04:01:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.816 04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.816 04:01:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.816 04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.816 04:01:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.816 04:01:11 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:09.816 04:01:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.816 04:01:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.816 04:01:11 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.816 04:01:11 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.816 04:01:11 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:09.816 04:01:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:09.816 04:01:11 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:09.816 04:01:11 -- setup/devices.sh@50 -- # local mount_point= 00:05:09.816 04:01:11 -- setup/devices.sh@51 -- # local test_file= 00:05:09.816 04:01:11 -- setup/devices.sh@53 -- # local found=0 00:05:09.816 04:01:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.816 04:01:11 -- setup/devices.sh@59 -- # local pci status 00:05:09.816 04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.816 04:01:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.816 04:01:11 -- setup/devices.sh@47 -- # setup output config 00:05:09.816 04:01:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.817 04:01:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:10.075 04:01:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.075 04:01:11 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:10.075 04:01:11 -- setup/devices.sh@63 -- # found=1 00:05:10.075 04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.075 04:01:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.075 04:01:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.642 04:01:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.642 04:01:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.642 04:01:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.642 04:01:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.642 04:01:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.642 04:01:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:10.642 04:01:12 -- setup/devices.sh@68 -- # return 0 00:05:10.642 04:01:12 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:10.642 04:01:12 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.642 04:01:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.642 04:01:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:10.642 04:01:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:10.642 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:10.642 00:05:10.642 real 0m4.691s 00:05:10.642 user 0m1.011s 00:05:10.642 sys 0m1.340s 00:05:10.642 04:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.642 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.642 ************************************ 00:05:10.642 END TEST nvme_mount 00:05:10.642 ************************************ 00:05:10.642 04:01:12 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:10.642 04:01:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.642 04:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.642 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.642 ************************************ 00:05:10.642 START TEST dm_mount 00:05:10.643 ************************************ 00:05:10.643 04:01:12 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:10.643 04:01:12 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:10.643 04:01:12 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:10.643 04:01:12 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:10.643 04:01:12 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:10.643 04:01:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.643 04:01:12 -- setup/common.sh@40 -- # local part_no=2 00:05:10.643 04:01:12 -- setup/common.sh@41 -- # local size=1073741824 00:05:10.643 04:01:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.643 04:01:12 -- setup/common.sh@44 -- # parts=() 00:05:10.643 04:01:12 -- setup/common.sh@44 -- # local parts 00:05:10.643 04:01:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.643 04:01:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.643 04:01:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.643 04:01:12 -- setup/common.sh@46 -- # (( part++ )) 00:05:10.643 04:01:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.643 04:01:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.643 04:01:12 -- setup/common.sh@46 -- # (( part++ )) 00:05:10.643 04:01:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.643 04:01:12 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:10.643 04:01:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:10.643 04:01:12 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:12.020 Creating new GPT entries in memory. 00:05:12.020 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.020 other utilities. 00:05:12.020 04:01:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.020 04:01:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.020 04:01:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.020 04:01:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.020 04:01:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:12.956 Creating new GPT entries in memory. 00:05:12.956 The operation has completed successfully. 00:05:12.956 04:01:14 -- setup/common.sh@57 -- # (( part++ )) 00:05:12.956 04:01:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.956 04:01:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:12.956 04:01:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.956 04:01:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:13.891 The operation has completed successfully. 00:05:13.891 04:01:15 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.891 04:01:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.891 04:01:15 -- setup/common.sh@62 -- # wait 66311 00:05:13.891 04:01:15 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:13.891 04:01:15 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.891 04:01:15 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:13.891 04:01:15 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:13.891 04:01:15 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:13.891 04:01:15 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:13.891 04:01:15 -- setup/devices.sh@161 -- # break 00:05:13.891 04:01:15 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:13.891 04:01:15 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:13.891 04:01:15 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:13.891 04:01:15 -- setup/devices.sh@166 -- # dm=dm-0 00:05:13.891 04:01:15 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:13.891 04:01:15 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:13.892 04:01:15 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.892 04:01:15 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:13.892 04:01:15 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.892 04:01:15 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:13.892 04:01:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:13.892 04:01:15 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.892 04:01:15 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:13.892 04:01:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:13.892 04:01:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:13.892 04:01:15 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.892 04:01:15 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:13.892 04:01:15 -- setup/devices.sh@53 -- # local found=0 00:05:13.892 04:01:15 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:13.892 04:01:15 -- setup/devices.sh@56 -- # : 00:05:13.892 04:01:15 -- setup/devices.sh@59 -- # local pci status 00:05:13.892 04:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.892 04:01:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:13.892 04:01:15 -- setup/devices.sh@47 -- # setup output config 00:05:13.892 04:01:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.892 04:01:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.150 04:01:15 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.150 04:01:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:14.150 04:01:15 -- setup/devices.sh@63 -- # found=1 00:05:14.150 04:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.150 04:01:15 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.150 04:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.409 04:01:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.409 04:01:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.669 04:01:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.669 04:01:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.669 04:01:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.669 04:01:16 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:14.669 04:01:16 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.669 04:01:16 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.669 04:01:16 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:14.669 04:01:16 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.669 04:01:16 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:14.669 04:01:16 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:14.669 04:01:16 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:14.669 04:01:16 -- setup/devices.sh@50 -- # local mount_point= 00:05:14.669 04:01:16 -- setup/devices.sh@51 -- # local test_file= 00:05:14.669 04:01:16 -- setup/devices.sh@53 -- # local found=0 00:05:14.669 04:01:16 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.669 04:01:16 -- setup/devices.sh@59 -- # local pci status 00:05:14.669 04:01:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.669 04:01:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:14.669 04:01:16 -- setup/devices.sh@47 -- # setup output config 00:05:14.669 04:01:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.669 04:01:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.928 04:01:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.928 04:01:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:14.928 04:01:16 -- setup/devices.sh@63 -- # found=1 00:05:14.928 04:01:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.928 04:01:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.929 04:01:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 04:01:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.188 04:01:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.188 04:01:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.188 04:01:16 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.447 04:01:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.447 04:01:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.447 04:01:17 -- setup/devices.sh@68 -- # return 0 00:05:15.447 04:01:17 -- setup/devices.sh@187 -- # cleanup_dm 00:05:15.447 04:01:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:15.447 04:01:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.447 04:01:17 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:15.447 04:01:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.447 04:01:17 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:15.447 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.447 04:01:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.447 04:01:17 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:15.447 00:05:15.447 real 0m4.694s 00:05:15.447 user 0m0.721s 00:05:15.447 sys 0m0.885s 00:05:15.447 04:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.447 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.447 ************************************ 00:05:15.447 END TEST dm_mount 00:05:15.447 ************************************ 00:05:15.447 04:01:17 -- setup/devices.sh@1 -- # cleanup 00:05:15.447 04:01:17 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:15.447 04:01:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.447 04:01:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.447 04:01:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.447 04:01:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.447 04:01:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.706 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:15.706 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:15.706 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.706 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.706 04:01:17 -- setup/devices.sh@12 -- # cleanup_dm 00:05:15.706 04:01:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:15.706 04:01:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.706 04:01:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.706 04:01:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.706 04:01:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.706 04:01:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:15.706 00:05:15.706 real 0m11.106s 00:05:15.706 user 0m2.485s 00:05:15.706 sys 0m2.895s 00:05:15.706 04:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.706 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.706 ************************************ 00:05:15.706 END TEST devices 00:05:15.706 ************************************ 00:05:15.706 00:05:15.706 real 0m23.710s 00:05:15.706 user 0m8.102s 00:05:15.706 sys 0m10.001s 00:05:15.706 04:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.706 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.706 ************************************ 00:05:15.706 END TEST setup.sh 00:05:15.706 ************************************ 00:05:15.964 04:01:17 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:15.965 Hugepages 00:05:15.965 node hugesize free / total 00:05:15.965 node0 1048576kB 0 / 0 00:05:15.965 node0 2048kB 2048 / 2048 00:05:15.965 00:05:15.965 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:16.223 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:16.223 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:16.223 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:16.223 04:01:17 -- spdk/autotest.sh@128 -- # uname -s 00:05:16.223 04:01:17 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:16.223 04:01:17 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:16.223 04:01:17 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.159 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.159 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.159 04:01:18 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:18.094 04:01:19 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:18.094 04:01:19 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:18.094 04:01:19 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.094 04:01:19 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:18.094 04:01:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:18.094 04:01:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:18.094 04:01:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.094 04:01:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:18.094 04:01:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:18.352 04:01:19 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:18.352 04:01:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:18.352 04:01:19 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.611 Waiting for block devices as requested 00:05:18.611 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.870 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.870 04:01:20 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:18.870 04:01:20 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:18.870 04:01:20 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:18.870 04:01:20 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:18.870 04:01:20 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:18.870 04:01:20 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:18.870 04:01:20 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1552 -- # continue 00:05:18.870 04:01:20 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:18.870 04:01:20 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:18.870 04:01:20 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:18.870 04:01:20 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:18.870 04:01:20 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:18.870 04:01:20 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:18.870 04:01:20 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:18.870 04:01:20 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:18.870 04:01:20 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:18.870 04:01:20 -- common/autotest_common.sh@1552 -- # continue 00:05:18.870 04:01:20 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:18.870 04:01:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.870 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.870 04:01:20 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:18.870 04:01:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.870 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.870 04:01:20 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.805 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.805 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:19.805 04:01:21 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:19.805 04:01:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.805 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.063 04:01:21 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:20.063 04:01:21 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:20.063 04:01:21 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:20.063 04:01:21 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:20.063 04:01:21 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:20.063 04:01:21 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:20.063 04:01:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:20.063 04:01:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:20.064 04:01:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.064 04:01:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.064 04:01:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:20.064 04:01:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:20.064 04:01:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:20.064 04:01:21 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:20.064 04:01:21 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:20.064 04:01:21 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:20.064 04:01:21 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.064 04:01:21 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:20.064 04:01:21 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:20.064 04:01:21 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:20.064 04:01:21 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.064 04:01:21 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:20.064 04:01:21 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:20.064 04:01:21 -- common/autotest_common.sh@1588 -- # return 0 00:05:20.064 04:01:21 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:20.064 04:01:21 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:20.064 04:01:21 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:20.064 04:01:21 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:20.064 04:01:21 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:20.064 04:01:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.064 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.064 04:01:21 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.064 04:01:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.064 04:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.064 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.064 ************************************ 00:05:20.064 START TEST env 00:05:20.064 ************************************ 00:05:20.064 04:01:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.064 * Looking for test storage... 
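The opal_revert_cleanup step above walks every NVMe controller returned by gen_nvme.sh and keeps only those whose PCI device ID is 0x0a54; the QEMU controllers in this run report 0x0010, so the list stays empty and the step returns immediately. A minimal sketch of that filter, assuming the same repo layout and sysfs paths shown in the trace (illustrative only, not the helper from common/autotest_common.sh):

  rootdir=/home/vagrant/spdk_repo/spdk
  # List NVMe transport addresses (PCI BDFs) the same way get_nvme_bdfs does above
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      # Only 0x0a54 controllers need the opal revert; 0x0010 (these QEMU NVMes) are skipped
      [[ $device == 0x0a54 ]] && echo "$bdf"
  done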
00:05:20.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:20.064 04:01:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.064 04:01:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.064 04:01:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.323 04:01:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.323 04:01:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.323 04:01:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.323 04:01:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.323 04:01:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.323 04:01:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.323 04:01:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.323 04:01:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.323 04:01:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.323 04:01:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.323 04:01:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.323 04:01:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.323 04:01:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.323 04:01:21 -- scripts/common.sh@344 -- # : 1 00:05:20.323 04:01:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.323 04:01:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.323 04:01:21 -- scripts/common.sh@364 -- # decimal 1 00:05:20.323 04:01:21 -- scripts/common.sh@352 -- # local d=1 00:05:20.323 04:01:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.323 04:01:21 -- scripts/common.sh@354 -- # echo 1 00:05:20.323 04:01:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.323 04:01:21 -- scripts/common.sh@365 -- # decimal 2 00:05:20.323 04:01:21 -- scripts/common.sh@352 -- # local d=2 00:05:20.323 04:01:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.323 04:01:21 -- scripts/common.sh@354 -- # echo 2 00:05:20.323 04:01:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.323 04:01:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.323 04:01:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.323 04:01:21 -- scripts/common.sh@367 -- # return 0 00:05:20.323 04:01:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.323 04:01:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.323 --rc genhtml_branch_coverage=1 00:05:20.323 --rc genhtml_function_coverage=1 00:05:20.323 --rc genhtml_legend=1 00:05:20.323 --rc geninfo_all_blocks=1 00:05:20.323 --rc geninfo_unexecuted_blocks=1 00:05:20.323 00:05:20.323 ' 00:05:20.323 04:01:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.323 --rc genhtml_branch_coverage=1 00:05:20.323 --rc genhtml_function_coverage=1 00:05:20.323 --rc genhtml_legend=1 00:05:20.323 --rc geninfo_all_blocks=1 00:05:20.323 --rc geninfo_unexecuted_blocks=1 00:05:20.323 00:05:20.323 ' 00:05:20.323 04:01:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.323 --rc genhtml_branch_coverage=1 00:05:20.323 --rc genhtml_function_coverage=1 00:05:20.323 --rc genhtml_legend=1 00:05:20.323 --rc geninfo_all_blocks=1 00:05:20.323 --rc geninfo_unexecuted_blocks=1 00:05:20.323 00:05:20.323 ' 00:05:20.323 04:01:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.323 --rc genhtml_branch_coverage=1 00:05:20.323 --rc genhtml_function_coverage=1 00:05:20.323 --rc genhtml_legend=1 00:05:20.323 --rc geninfo_all_blocks=1 00:05:20.323 --rc geninfo_unexecuted_blocks=1 00:05:20.323 00:05:20.323 ' 00:05:20.323 04:01:21 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.323 04:01:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.323 04:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.323 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.323 ************************************ 00:05:20.323 START TEST env_memory 00:05:20.323 ************************************ 00:05:20.323 04:01:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.323 00:05:20.323 00:05:20.323 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.323 http://cunit.sourceforge.net/ 00:05:20.323 00:05:20.323 00:05:20.323 Suite: memory 00:05:20.323 Test: alloc and free memory map ...[2024-11-26 04:01:21.935958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.323 passed 00:05:20.323 Test: mem map translation ...[2024-11-26 04:01:21.967453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.323 [2024-11-26 04:01:21.967497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.323 [2024-11-26 04:01:21.967552] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.323 [2024-11-26 04:01:21.967563] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.323 passed 00:05:20.323 Test: mem map registration ...[2024-11-26 04:01:22.031783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:20.323 [2024-11-26 04:01:22.031829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:20.323 passed 00:05:20.582 Test: mem map adjacent registrations ...passed 00:05:20.582 00:05:20.582 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.582 suites 1 1 n/a 0 0 00:05:20.582 tests 4 4 4 0 0 00:05:20.582 asserts 152 152 152 0 n/a 00:05:20.582 00:05:20.582 Elapsed time = 0.213 seconds 00:05:20.582 00:05:20.582 real 0m0.232s 00:05:20.582 user 0m0.214s 00:05:20.582 sys 0m0.011s 00:05:20.583 04:01:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.583 04:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:20.583 ************************************ 00:05:20.583 END TEST env_memory 00:05:20.583 ************************************ 00:05:20.583 04:01:22 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.583 04:01:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.583 04:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.583 04:01:22 -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.583 ************************************ 00:05:20.583 START TEST env_vtophys 00:05:20.583 ************************************ 00:05:20.583 04:01:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.583 EAL: lib.eal log level changed from notice to debug 00:05:20.583 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 1 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 2 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 3 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 4 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 5 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 6 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 7 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 8 as core 0 on socket 0 00:05:20.583 EAL: Detected lcore 9 as core 0 on socket 0 00:05:20.583 EAL: Maximum logical cores by configuration: 128 00:05:20.583 EAL: Detected CPU lcores: 10 00:05:20.583 EAL: Detected NUMA nodes: 1 00:05:20.583 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:20.583 EAL: Detected shared linkage of DPDK 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:20.583 EAL: Registered [vdev] bus. 00:05:20.583 EAL: bus.vdev log level changed from disabled to notice 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:20.583 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:20.583 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:20.583 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:20.583 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Selected IOVA mode 'PA' 00:05:20.583 EAL: Probing VFIO support... 00:05:20.583 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:20.583 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:20.583 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.583 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.583 EAL: Setting up physically contiguous memory... 
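Because /sys/module/vfio is missing in this VM, EAL above skips VFIO support and settles on IOVA mode 'PA' (the devices were bound to uio_pci_generic by setup.sh earlier in the log). A rough stand-alone check of the same condition, assuming the standard sysfs module paths EAL prints; this is illustrative, not what EAL itself runs:

  # If the vfio/vfio-pci modules are loaded, an IOMMU-backed IOVA=VA setup is possible;
  # otherwise DPDK falls back to physical addressing, as seen in this run.
  if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
      echo 'vfio loaded: VFIO / IOVA VA available'
  else
      echo 'vfio not loaded: skipping VFIO, using IOVA PA'
  fi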
00:05:20.583 EAL: Setting maximum number of open files to 524288 00:05:20.583 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.583 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.583 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.583 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.583 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.583 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.583 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.583 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.583 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.583 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.583 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.583 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.583 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.583 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.583 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.583 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.583 EAL: Hugepages will be freed exactly as allocated. 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: TSC frequency is ~2200000 KHz 00:05:20.583 EAL: Main lcore 0 is ready (tid=7fd962c1da00;cpuset=[0]) 00:05:20.583 EAL: Trying to obtain current memory policy. 00:05:20.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.583 EAL: Restoring previous memory policy: 0 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.583 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.583 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.583 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:20.583 00:05:20.583 00:05:20.583 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.583 http://cunit.sourceforge.net/ 00:05:20.583 00:05:20.583 00:05:20.583 Suite: components_suite 00:05:20.583 Test: vtophys_malloc_test ...passed 00:05:20.583 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
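The four 0x400000000-byte reservations above are exactly n_segs * hugepage_sz from the "Creating 4 segment lists" line: 8192 segments of 2 MiB each per list. A quick arithmetic check (not part of the test, just verifying the numbers printed in the log):

  printf '0x%x\n' $((8192 * 2097152))                     # 0x400000000 = 16 GiB per memseg list
  printf '%d GiB\n' $((4 * 8192 * 2097152 / 1024**3))     # 64 GiB of virtual address space across the 4 lists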
00:05:20.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.583 EAL: Restoring previous memory policy: 4 00:05:20.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.583 EAL: Trying to obtain current memory policy. 00:05:20.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.583 EAL: Restoring previous memory policy: 4 00:05:20.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.583 EAL: Trying to obtain current memory policy. 00:05:20.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.583 EAL: Restoring previous memory policy: 4 00:05:20.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.583 EAL: request: mp_malloc_sync 00:05:20.583 EAL: No shared files mode enabled, IPC is disabled 00:05:20.583 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.583 EAL: Trying to obtain current memory policy. 00:05:20.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.842 EAL: Restoring previous memory policy: 4 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.842 EAL: Trying to obtain current memory policy. 00:05:20.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.842 EAL: Restoring previous memory policy: 4 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.842 EAL: Trying to obtain current memory policy. 
00:05:20.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.842 EAL: Restoring previous memory policy: 4 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.842 EAL: Trying to obtain current memory policy. 00:05:20.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.842 EAL: Restoring previous memory policy: 4 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.842 EAL: Trying to obtain current memory policy. 00:05:20.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.842 EAL: Restoring previous memory policy: 4 00:05:20.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.842 EAL: request: mp_malloc_sync 00:05:20.842 EAL: No shared files mode enabled, IPC is disabled 00:05:20.842 EAL: Heap on socket 0 was expanded by 258MB 00:05:21.101 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.101 EAL: request: mp_malloc_sync 00:05:21.101 EAL: No shared files mode enabled, IPC is disabled 00:05:21.101 EAL: Heap on socket 0 was shrunk by 258MB 00:05:21.101 EAL: Trying to obtain current memory policy. 00:05:21.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.101 EAL: Restoring previous memory policy: 4 00:05:21.101 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.101 EAL: request: mp_malloc_sync 00:05:21.101 EAL: No shared files mode enabled, IPC is disabled 00:05:21.101 EAL: Heap on socket 0 was expanded by 514MB 00:05:21.389 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.389 EAL: request: mp_malloc_sync 00:05:21.389 EAL: No shared files mode enabled, IPC is disabled 00:05:21.389 EAL: Heap on socket 0 was shrunk by 514MB 00:05:21.389 EAL: Trying to obtain current memory policy. 
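The expansion and shrink sizes reported by the vtophys_spdk_malloc_test iterations above grow as 2^k + 2 MB (presumably a power-of-two allocation per step plus overhead rounded up to the 2 MB hugepage size). The 1026 MB step below fits the same pattern; a one-liner reproducing the sequence for comparison:

  for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB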
00:05:21.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.658 EAL: Restoring previous memory policy: 4 00:05:21.658 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.658 EAL: request: mp_malloc_sync 00:05:21.658 EAL: No shared files mode enabled, IPC is disabled 00:05:21.658 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.658 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.917 passed 00:05:21.917 00:05:21.917 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.917 suites 1 1 n/a 0 0 00:05:21.917 tests 2 2 2 0 0 00:05:21.917 asserts 5302 5302 5302 0 n/a 00:05:21.917 00:05:21.917 Elapsed time = 1.250 seconds 00:05:21.917 EAL: request: mp_malloc_sync 00:05:21.917 EAL: No shared files mode enabled, IPC is disabled 00:05:21.917 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.917 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.917 EAL: request: mp_malloc_sync 00:05:21.917 EAL: No shared files mode enabled, IPC is disabled 00:05:21.917 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.917 EAL: No shared files mode enabled, IPC is disabled 00:05:21.917 EAL: No shared files mode enabled, IPC is disabled 00:05:21.917 EAL: No shared files mode enabled, IPC is disabled 00:05:21.917 00:05:21.917 real 0m1.450s 00:05:21.917 user 0m0.803s 00:05:21.917 sys 0m0.512s 00:05:21.917 04:01:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.917 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:21.917 ************************************ 00:05:21.918 END TEST env_vtophys 00:05:21.918 ************************************ 00:05:21.918 04:01:23 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:21.918 04:01:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.918 04:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.918 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:21.918 ************************************ 00:05:21.918 START TEST env_pci 00:05:21.918 ************************************ 00:05:21.918 04:01:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.176 00:05:22.176 00:05:22.176 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.176 http://cunit.sourceforge.net/ 00:05:22.176 00:05:22.176 00:05:22.176 Suite: pci 00:05:22.176 Test: pci_hook ...[2024-11-26 04:01:23.685523] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67455 has claimed it 00:05:22.176 passed 00:05:22.176 00:05:22.176 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.176 suites 1 1 n/a 0 0 00:05:22.176 tests 1 1 1 0 0 00:05:22.176 asserts 25 25 25 0 n/a 00:05:22.176 00:05:22.176 Elapsed time = 0.002 seconds 00:05:22.176 EAL: Cannot find device (10000:00:01.0) 00:05:22.176 EAL: Failed to attach device on primary process 00:05:22.176 00:05:22.176 real 0m0.022s 00:05:22.176 user 0m0.008s 00:05:22.176 sys 0m0.013s 00:05:22.176 04:01:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.176 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.176 ************************************ 00:05:22.176 END TEST env_pci 00:05:22.176 ************************************ 00:05:22.176 04:01:23 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.176 04:01:23 -- env/env.sh@15 -- # uname 00:05:22.176 04:01:23 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.176 04:01:23 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:22.176 04:01:23 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.176 04:01:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:22.176 04:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.176 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.176 ************************************ 00:05:22.176 START TEST env_dpdk_post_init 00:05:22.176 ************************************ 00:05:22.177 04:01:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.177 EAL: Detected CPU lcores: 10 00:05:22.177 EAL: Detected NUMA nodes: 1 00:05:22.177 EAL: Detected shared linkage of DPDK 00:05:22.177 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.177 EAL: Selected IOVA mode 'PA' 00:05:22.177 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.177 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:22.177 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:22.177 Starting DPDK initialization... 00:05:22.177 Starting SPDK post initialization... 00:05:22.177 SPDK NVMe probe 00:05:22.177 Attaching to 0000:00:06.0 00:05:22.177 Attaching to 0000:00:07.0 00:05:22.177 Attached to 0000:00:06.0 00:05:22.177 Attached to 0000:00:07.0 00:05:22.177 Cleaning up... 00:05:22.177 00:05:22.177 real 0m0.179s 00:05:22.177 user 0m0.044s 00:05:22.177 sys 0m0.035s 00:05:22.177 04:01:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.177 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.177 ************************************ 00:05:22.177 END TEST env_dpdk_post_init 00:05:22.177 ************************************ 00:05:22.435 04:01:23 -- env/env.sh@26 -- # uname 00:05:22.435 04:01:23 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:22.435 04:01:23 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.435 04:01:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.435 04:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.435 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.435 ************************************ 00:05:22.435 START TEST env_mem_callbacks 00:05:22.435 ************************************ 00:05:22.435 04:01:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.435 EAL: Detected CPU lcores: 10 00:05:22.435 EAL: Detected NUMA nodes: 1 00:05:22.435 EAL: Detected shared linkage of DPDK 00:05:22.435 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.435 EAL: Selected IOVA mode 'PA' 00:05:22.435 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.435 00:05:22.435 00:05:22.435 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.435 http://cunit.sourceforge.net/ 00:05:22.435 00:05:22.435 00:05:22.435 Suite: memory 00:05:22.435 Test: test ... 
00:05:22.435 register 0x200000200000 2097152 00:05:22.435 malloc 3145728 00:05:22.435 register 0x200000400000 4194304 00:05:22.435 buf 0x200000500000 len 3145728 PASSED 00:05:22.435 malloc 64 00:05:22.435 buf 0x2000004fff40 len 64 PASSED 00:05:22.435 malloc 4194304 00:05:22.435 register 0x200000800000 6291456 00:05:22.435 buf 0x200000a00000 len 4194304 PASSED 00:05:22.435 free 0x200000500000 3145728 00:05:22.435 free 0x2000004fff40 64 00:05:22.435 unregister 0x200000400000 4194304 PASSED 00:05:22.435 free 0x200000a00000 4194304 00:05:22.435 unregister 0x200000800000 6291456 PASSED 00:05:22.435 malloc 8388608 00:05:22.435 register 0x200000400000 10485760 00:05:22.435 buf 0x200000600000 len 8388608 PASSED 00:05:22.435 free 0x200000600000 8388608 00:05:22.435 unregister 0x200000400000 10485760 PASSED 00:05:22.435 passed 00:05:22.435 00:05:22.435 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.435 suites 1 1 n/a 0 0 00:05:22.435 tests 1 1 1 0 0 00:05:22.435 asserts 15 15 15 0 n/a 00:05:22.435 00:05:22.435 Elapsed time = 0.010 seconds 00:05:22.435 00:05:22.435 real 0m0.147s 00:05:22.435 user 0m0.016s 00:05:22.435 sys 0m0.030s 00:05:22.435 04:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.435 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.435 ************************************ 00:05:22.435 END TEST env_mem_callbacks 00:05:22.435 ************************************ 00:05:22.435 00:05:22.435 real 0m2.494s 00:05:22.435 user 0m1.280s 00:05:22.435 sys 0m0.862s 00:05:22.435 04:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.435 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.435 ************************************ 00:05:22.435 END TEST env 00:05:22.435 ************************************ 00:05:22.695 04:01:24 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:22.695 04:01:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.695 04:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.695 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.695 ************************************ 00:05:22.695 START TEST rpc 00:05:22.695 ************************************ 00:05:22.695 04:01:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:22.695 * Looking for test storage... 
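Every sub-test in this log, env_mem_callbacks included, is launched through the run_test wrapper from common/autotest_common.sh, which prints the asterisk banners and the real/user/sys timing seen above. A simplified sketch of that pattern, illustrative only and not the actual implementation:

  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                 # run the test binary or function with its arguments
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
  # e.g. run_test_sketch env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut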
00:05:22.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:22.695 04:01:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.695 04:01:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.695 04:01:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.695 04:01:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.695 04:01:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.695 04:01:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.695 04:01:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.695 04:01:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.695 04:01:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.695 04:01:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.695 04:01:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.695 04:01:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.695 04:01:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.695 04:01:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.695 04:01:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.695 04:01:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.695 04:01:24 -- scripts/common.sh@344 -- # : 1 00:05:22.695 04:01:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.695 04:01:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.695 04:01:24 -- scripts/common.sh@364 -- # decimal 1 00:05:22.695 04:01:24 -- scripts/common.sh@352 -- # local d=1 00:05:22.695 04:01:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.695 04:01:24 -- scripts/common.sh@354 -- # echo 1 00:05:22.695 04:01:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.695 04:01:24 -- scripts/common.sh@365 -- # decimal 2 00:05:22.695 04:01:24 -- scripts/common.sh@352 -- # local d=2 00:05:22.695 04:01:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.695 04:01:24 -- scripts/common.sh@354 -- # echo 2 00:05:22.695 04:01:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.695 04:01:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.695 04:01:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.695 04:01:24 -- scripts/common.sh@367 -- # return 0 00:05:22.695 04:01:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.695 04:01:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.695 --rc genhtml_branch_coverage=1 00:05:22.695 --rc genhtml_function_coverage=1 00:05:22.695 --rc genhtml_legend=1 00:05:22.695 --rc geninfo_all_blocks=1 00:05:22.695 --rc geninfo_unexecuted_blocks=1 00:05:22.695 00:05:22.695 ' 00:05:22.695 04:01:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.695 --rc genhtml_branch_coverage=1 00:05:22.695 --rc genhtml_function_coverage=1 00:05:22.695 --rc genhtml_legend=1 00:05:22.695 --rc geninfo_all_blocks=1 00:05:22.695 --rc geninfo_unexecuted_blocks=1 00:05:22.695 00:05:22.695 ' 00:05:22.695 04:01:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.695 --rc genhtml_branch_coverage=1 00:05:22.695 --rc genhtml_function_coverage=1 00:05:22.695 --rc genhtml_legend=1 00:05:22.695 --rc geninfo_all_blocks=1 00:05:22.695 --rc geninfo_unexecuted_blocks=1 00:05:22.695 00:05:22.695 ' 00:05:22.695 04:01:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.695 --rc genhtml_branch_coverage=1 00:05:22.695 --rc genhtml_function_coverage=1 00:05:22.695 --rc genhtml_legend=1 00:05:22.695 --rc geninfo_all_blocks=1 00:05:22.695 --rc geninfo_unexecuted_blocks=1 00:05:22.695 00:05:22.695 ' 00:05:22.695 04:01:24 -- rpc/rpc.sh@65 -- # spdk_pid=67572 00:05:22.695 04:01:24 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.695 04:01:24 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:22.695 04:01:24 -- rpc/rpc.sh@67 -- # waitforlisten 67572 00:05:22.695 04:01:24 -- common/autotest_common.sh@829 -- # '[' -z 67572 ']' 00:05:22.695 04:01:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.695 04:01:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.695 04:01:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.695 04:01:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.695 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.955 [2024-11-26 04:01:24.484114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.955 [2024-11-26 04:01:24.484223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67572 ] 00:05:22.955 [2024-11-26 04:01:24.624306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.955 [2024-11-26 04:01:24.703840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.955 [2024-11-26 04:01:24.704013] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:22.955 [2024-11-26 04:01:24.704031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67572' to capture a snapshot of events at runtime. 00:05:22.955 [2024-11-26 04:01:24.704043] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67572 for offline analysis/debug. 
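The rpc tests that follow drive the spdk_tgt instance started above (pid 67572, listening on /var/tmp/spdk.sock); in rpc.sh, rpc_cmd forwards each call over that socket. The same calls can be issued by hand with scripts/rpc.py once the socket is up; below is a sketch using the methods and arguments that appear in the rpc_integrity trace, with a simple socket-existence loop standing in for waitforlisten:

  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/build/bin/spdk_tgt" -e bdev &            # same tracepoint group mask as above
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512             # 8 MiB / 512-byte blocks -> Malloc0 (16384 blocks)
  "$rootdir/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
  "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length           # 2: Malloc0 plus Passthru0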
00:05:22.955 [2024-11-26 04:01:24.704074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.890 04:01:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.890 04:01:25 -- common/autotest_common.sh@862 -- # return 0 00:05:23.890 04:01:25 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.890 04:01:25 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.890 04:01:25 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:23.890 04:01:25 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:23.890 04:01:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.890 04:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.890 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.890 ************************************ 00:05:23.890 START TEST rpc_integrity 00:05:23.890 ************************************ 00:05:23.890 04:01:25 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:23.890 04:01:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.890 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.890 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.890 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.890 04:01:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.890 04:01:25 -- rpc/rpc.sh@13 -- # jq length 00:05:23.890 04:01:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.890 04:01:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.890 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.890 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.890 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.890 04:01:25 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:23.890 04:01:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.890 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.890 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.890 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.890 04:01:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.890 { 00:05:23.890 "aliases": [ 00:05:23.890 "b3bf12dd-f22e-4a52-87a5-98c382c8224b" 00:05:23.890 ], 00:05:23.890 "assigned_rate_limits": { 00:05:23.890 "r_mbytes_per_sec": 0, 00:05:23.890 "rw_ios_per_sec": 0, 00:05:23.890 "rw_mbytes_per_sec": 0, 00:05:23.890 "w_mbytes_per_sec": 0 00:05:23.890 }, 00:05:23.890 "block_size": 512, 00:05:23.890 "claimed": false, 00:05:23.890 "driver_specific": {}, 00:05:23.890 "memory_domains": [ 00:05:23.890 { 00:05:23.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.890 "dma_device_type": 2 00:05:23.890 } 00:05:23.890 ], 00:05:23.890 "name": "Malloc0", 00:05:23.890 "num_blocks": 16384, 00:05:23.890 "product_name": "Malloc disk", 00:05:23.890 "supported_io_types": { 00:05:23.890 "abort": true, 00:05:23.890 "compare": false, 00:05:23.890 "compare_and_write": false, 00:05:23.890 "flush": true, 00:05:23.890 "nvme_admin": false, 00:05:23.890 "nvme_io": false, 00:05:23.890 "read": true, 00:05:23.890 "reset": true, 00:05:23.890 "unmap": true, 00:05:23.890 "write": true, 00:05:23.890 "write_zeroes": true 00:05:23.890 }, 
00:05:23.890 "uuid": "b3bf12dd-f22e-4a52-87a5-98c382c8224b", 00:05:23.890 "zoned": false 00:05:23.890 } 00:05:23.890 ]' 00:05:23.890 04:01:25 -- rpc/rpc.sh@17 -- # jq length 00:05:24.149 04:01:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.149 04:01:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:24.149 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.149 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.149 [2024-11-26 04:01:25.672560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:24.149 [2024-11-26 04:01:25.672638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.149 [2024-11-26 04:01:25.672654] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x731b60 00:05:24.149 [2024-11-26 04:01:25.672662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.149 [2024-11-26 04:01:25.674127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.149 [2024-11-26 04:01:25.674197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.149 Passthru0 00:05:24.149 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.149 04:01:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.149 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.149 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.149 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.149 04:01:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.149 { 00:05:24.149 "aliases": [ 00:05:24.149 "b3bf12dd-f22e-4a52-87a5-98c382c8224b" 00:05:24.149 ], 00:05:24.149 "assigned_rate_limits": { 00:05:24.149 "r_mbytes_per_sec": 0, 00:05:24.149 "rw_ios_per_sec": 0, 00:05:24.149 "rw_mbytes_per_sec": 0, 00:05:24.149 "w_mbytes_per_sec": 0 00:05:24.149 }, 00:05:24.149 "block_size": 512, 00:05:24.149 "claim_type": "exclusive_write", 00:05:24.149 "claimed": true, 00:05:24.149 "driver_specific": {}, 00:05:24.149 "memory_domains": [ 00:05:24.149 { 00:05:24.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.149 "dma_device_type": 2 00:05:24.149 } 00:05:24.149 ], 00:05:24.149 "name": "Malloc0", 00:05:24.149 "num_blocks": 16384, 00:05:24.149 "product_name": "Malloc disk", 00:05:24.149 "supported_io_types": { 00:05:24.149 "abort": true, 00:05:24.149 "compare": false, 00:05:24.149 "compare_and_write": false, 00:05:24.149 "flush": true, 00:05:24.149 "nvme_admin": false, 00:05:24.149 "nvme_io": false, 00:05:24.149 "read": true, 00:05:24.149 "reset": true, 00:05:24.149 "unmap": true, 00:05:24.149 "write": true, 00:05:24.149 "write_zeroes": true 00:05:24.149 }, 00:05:24.149 "uuid": "b3bf12dd-f22e-4a52-87a5-98c382c8224b", 00:05:24.149 "zoned": false 00:05:24.149 }, 00:05:24.149 { 00:05:24.149 "aliases": [ 00:05:24.149 "91348a73-efd8-5c2c-879c-65ec9364f607" 00:05:24.149 ], 00:05:24.149 "assigned_rate_limits": { 00:05:24.149 "r_mbytes_per_sec": 0, 00:05:24.149 "rw_ios_per_sec": 0, 00:05:24.149 "rw_mbytes_per_sec": 0, 00:05:24.149 "w_mbytes_per_sec": 0 00:05:24.149 }, 00:05:24.149 "block_size": 512, 00:05:24.149 "claimed": false, 00:05:24.149 "driver_specific": { 00:05:24.149 "passthru": { 00:05:24.149 "base_bdev_name": "Malloc0", 00:05:24.149 "name": "Passthru0" 00:05:24.149 } 00:05:24.149 }, 00:05:24.149 "memory_domains": [ 00:05:24.149 { 00:05:24.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.149 "dma_device_type": 2 00:05:24.149 } 00:05:24.149 ], 
00:05:24.149 "name": "Passthru0", 00:05:24.149 "num_blocks": 16384, 00:05:24.149 "product_name": "passthru", 00:05:24.149 "supported_io_types": { 00:05:24.149 "abort": true, 00:05:24.149 "compare": false, 00:05:24.149 "compare_and_write": false, 00:05:24.149 "flush": true, 00:05:24.149 "nvme_admin": false, 00:05:24.149 "nvme_io": false, 00:05:24.149 "read": true, 00:05:24.149 "reset": true, 00:05:24.149 "unmap": true, 00:05:24.149 "write": true, 00:05:24.149 "write_zeroes": true 00:05:24.149 }, 00:05:24.149 "uuid": "91348a73-efd8-5c2c-879c-65ec9364f607", 00:05:24.150 "zoned": false 00:05:24.150 } 00:05:24.150 ]' 00:05:24.150 04:01:25 -- rpc/rpc.sh@21 -- # jq length 00:05:24.150 04:01:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.150 04:01:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.150 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.150 04:01:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.150 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.150 04:01:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.150 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.150 04:01:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.150 04:01:25 -- rpc/rpc.sh@26 -- # jq length 00:05:24.150 04:01:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.150 00:05:24.150 real 0m0.324s 00:05:24.150 user 0m0.212s 00:05:24.150 sys 0m0.035s 00:05:24.150 04:01:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 ************************************ 00:05:24.150 END TEST rpc_integrity 00:05:24.150 ************************************ 00:05:24.150 04:01:25 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.150 04:01:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.150 04:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 ************************************ 00:05:24.150 START TEST rpc_plugins 00:05:24.150 ************************************ 00:05:24.150 04:01:25 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:24.150 04:01:25 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.150 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.150 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.150 04:01:25 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.150 04:01:25 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.150 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.150 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.409 04:01:25 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.409 { 00:05:24.409 "aliases": [ 00:05:24.409 "942b2881-5e8a-435d-b17c-6ac9e50fcb23" 00:05:24.409 ], 00:05:24.409 "assigned_rate_limits": { 00:05:24.409 "r_mbytes_per_sec": 0, 00:05:24.409 
"rw_ios_per_sec": 0, 00:05:24.409 "rw_mbytes_per_sec": 0, 00:05:24.409 "w_mbytes_per_sec": 0 00:05:24.409 }, 00:05:24.409 "block_size": 4096, 00:05:24.409 "claimed": false, 00:05:24.409 "driver_specific": {}, 00:05:24.409 "memory_domains": [ 00:05:24.409 { 00:05:24.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.409 "dma_device_type": 2 00:05:24.409 } 00:05:24.409 ], 00:05:24.409 "name": "Malloc1", 00:05:24.409 "num_blocks": 256, 00:05:24.409 "product_name": "Malloc disk", 00:05:24.409 "supported_io_types": { 00:05:24.409 "abort": true, 00:05:24.409 "compare": false, 00:05:24.409 "compare_and_write": false, 00:05:24.409 "flush": true, 00:05:24.409 "nvme_admin": false, 00:05:24.409 "nvme_io": false, 00:05:24.409 "read": true, 00:05:24.409 "reset": true, 00:05:24.409 "unmap": true, 00:05:24.409 "write": true, 00:05:24.409 "write_zeroes": true 00:05:24.409 }, 00:05:24.409 "uuid": "942b2881-5e8a-435d-b17c-6ac9e50fcb23", 00:05:24.409 "zoned": false 00:05:24.409 } 00:05:24.409 ]' 00:05:24.409 04:01:25 -- rpc/rpc.sh@32 -- # jq length 00:05:24.409 04:01:25 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.409 04:01:25 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.409 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.409 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.409 04:01:25 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.409 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.409 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.409 04:01:25 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.409 04:01:25 -- rpc/rpc.sh@36 -- # jq length 00:05:24.409 04:01:26 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.409 00:05:24.409 real 0m0.156s 00:05:24.409 user 0m0.099s 00:05:24.409 sys 0m0.021s 00:05:24.409 04:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.409 ************************************ 00:05:24.409 END TEST rpc_plugins 00:05:24.409 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 ************************************ 00:05:24.409 04:01:26 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:24.409 04:01:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.409 04:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.409 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 ************************************ 00:05:24.409 START TEST rpc_trace_cmd_test 00:05:24.409 ************************************ 00:05:24.409 04:01:26 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:24.409 04:01:26 -- rpc/rpc.sh@40 -- # local info 00:05:24.409 04:01:26 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.409 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.409 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.409 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.409 04:01:26 -- rpc/rpc.sh@42 -- # info='{ 00:05:24.409 "bdev": { 00:05:24.409 "mask": "0x8", 00:05:24.409 "tpoint_mask": "0xffffffffffffffff" 00:05:24.409 }, 00:05:24.409 "bdev_nvme": { 00:05:24.409 "mask": "0x4000", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "blobfs": { 00:05:24.409 "mask": "0x80", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "dsa": { 00:05:24.409 "mask": "0x200", 00:05:24.409 
"tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "ftl": { 00:05:24.409 "mask": "0x40", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "iaa": { 00:05:24.409 "mask": "0x1000", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "iscsi_conn": { 00:05:24.409 "mask": "0x2", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "nvme_pcie": { 00:05:24.409 "mask": "0x800", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "nvme_tcp": { 00:05:24.409 "mask": "0x2000", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "nvmf_rdma": { 00:05:24.409 "mask": "0x10", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "nvmf_tcp": { 00:05:24.409 "mask": "0x20", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "scsi": { 00:05:24.409 "mask": "0x4", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "thread": { 00:05:24.409 "mask": "0x400", 00:05:24.409 "tpoint_mask": "0x0" 00:05:24.409 }, 00:05:24.409 "tpoint_group_mask": "0x8", 00:05:24.409 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67572" 00:05:24.409 }' 00:05:24.409 04:01:26 -- rpc/rpc.sh@43 -- # jq length 00:05:24.409 04:01:26 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:24.409 04:01:26 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:24.669 04:01:26 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:24.669 04:01:26 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:24.669 04:01:26 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:24.669 04:01:26 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:24.669 04:01:26 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:24.669 04:01:26 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:24.669 04:01:26 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:24.669 00:05:24.669 real 0m0.282s 00:05:24.669 user 0m0.243s 00:05:24.669 sys 0m0.028s 00:05:24.669 04:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.669 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.669 ************************************ 00:05:24.669 END TEST rpc_trace_cmd_test 00:05:24.669 ************************************ 00:05:24.669 04:01:26 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:24.669 04:01:26 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:24.669 04:01:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.669 04:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.669 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.669 ************************************ 00:05:24.669 START TEST go_rpc 00:05:24.669 ************************************ 00:05:24.669 04:01:26 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:24.669 04:01:26 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:24.928 04:01:26 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:24.928 04:01:26 -- rpc/rpc.sh@52 -- # jq length 00:05:24.928 04:01:26 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:24.928 04:01:26 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.928 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.928 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.928 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.928 04:01:26 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:24.928 04:01:26 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:24.928 04:01:26 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["e27fd064-28a0-4714-9735-731c5cd1c62c"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"e27fd064-28a0-4714-9735-731c5cd1c62c","zoned":false}]' 00:05:24.928 04:01:26 -- rpc/rpc.sh@57 -- # jq length 00:05:24.928 04:01:26 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:24.928 04:01:26 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:24.928 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.928 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.928 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.928 04:01:26 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:24.928 04:01:26 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:24.928 04:01:26 -- rpc/rpc.sh@61 -- # jq length 00:05:24.928 04:01:26 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:24.928 00:05:24.928 real 0m0.222s 00:05:24.928 user 0m0.148s 00:05:24.928 sys 0m0.040s 00:05:24.928 04:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.928 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:24.928 ************************************ 00:05:24.928 END TEST go_rpc 00:05:24.928 ************************************ 00:05:25.188 04:01:26 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.188 04:01:26 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.188 04:01:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.188 04:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 ************************************ 00:05:25.188 START TEST rpc_daemon_integrity 00:05:25.188 ************************************ 00:05:25.188 04:01:26 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:25.188 04:01:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.188 04:01:26 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.188 04:01:26 -- rpc/rpc.sh@13 -- # jq length 00:05:25.188 04:01:26 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.188 04:01:26 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.188 04:01:26 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:25.188 04:01:26 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.188 04:01:26 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.188 { 00:05:25.188 "aliases": [ 00:05:25.188 "2eb373c7-f601-4a76-9443-01a3f4d73ce0" 00:05:25.188 ], 00:05:25.188 "assigned_rate_limits": { 00:05:25.188 
"r_mbytes_per_sec": 0, 00:05:25.188 "rw_ios_per_sec": 0, 00:05:25.188 "rw_mbytes_per_sec": 0, 00:05:25.188 "w_mbytes_per_sec": 0 00:05:25.188 }, 00:05:25.188 "block_size": 512, 00:05:25.188 "claimed": false, 00:05:25.188 "driver_specific": {}, 00:05:25.188 "memory_domains": [ 00:05:25.188 { 00:05:25.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.188 "dma_device_type": 2 00:05:25.188 } 00:05:25.188 ], 00:05:25.188 "name": "Malloc3", 00:05:25.188 "num_blocks": 16384, 00:05:25.188 "product_name": "Malloc disk", 00:05:25.188 "supported_io_types": { 00:05:25.188 "abort": true, 00:05:25.188 "compare": false, 00:05:25.188 "compare_and_write": false, 00:05:25.188 "flush": true, 00:05:25.188 "nvme_admin": false, 00:05:25.188 "nvme_io": false, 00:05:25.188 "read": true, 00:05:25.188 "reset": true, 00:05:25.188 "unmap": true, 00:05:25.188 "write": true, 00:05:25.188 "write_zeroes": true 00:05:25.188 }, 00:05:25.188 "uuid": "2eb373c7-f601-4a76-9443-01a3f4d73ce0", 00:05:25.188 "zoned": false 00:05:25.188 } 00:05:25.188 ]' 00:05:25.188 04:01:26 -- rpc/rpc.sh@17 -- # jq length 00:05:25.188 04:01:26 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.188 04:01:26 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 [2024-11-26 04:01:26.848970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:25.188 [2024-11-26 04:01:26.849026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.188 [2024-11-26 04:01:26.849042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x733990 00:05:25.188 [2024-11-26 04:01:26.849050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.188 [2024-11-26 04:01:26.850285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.188 [2024-11-26 04:01:26.850332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.188 Passthru0 00:05:25.188 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.188 04:01:26 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.188 04:01:26 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.188 { 00:05:25.188 "aliases": [ 00:05:25.188 "2eb373c7-f601-4a76-9443-01a3f4d73ce0" 00:05:25.188 ], 00:05:25.188 "assigned_rate_limits": { 00:05:25.188 "r_mbytes_per_sec": 0, 00:05:25.188 "rw_ios_per_sec": 0, 00:05:25.188 "rw_mbytes_per_sec": 0, 00:05:25.188 "w_mbytes_per_sec": 0 00:05:25.188 }, 00:05:25.188 "block_size": 512, 00:05:25.188 "claim_type": "exclusive_write", 00:05:25.188 "claimed": true, 00:05:25.188 "driver_specific": {}, 00:05:25.188 "memory_domains": [ 00:05:25.188 { 00:05:25.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.188 "dma_device_type": 2 00:05:25.188 } 00:05:25.188 ], 00:05:25.188 "name": "Malloc3", 00:05:25.188 "num_blocks": 16384, 00:05:25.188 "product_name": "Malloc disk", 00:05:25.188 "supported_io_types": { 00:05:25.188 "abort": true, 00:05:25.188 "compare": false, 00:05:25.188 "compare_and_write": false, 00:05:25.188 "flush": true, 00:05:25.188 "nvme_admin": false, 00:05:25.188 "nvme_io": false, 00:05:25.188 "read": true, 00:05:25.188 "reset": true, 
00:05:25.188 "unmap": true, 00:05:25.188 "write": true, 00:05:25.188 "write_zeroes": true 00:05:25.188 }, 00:05:25.188 "uuid": "2eb373c7-f601-4a76-9443-01a3f4d73ce0", 00:05:25.188 "zoned": false 00:05:25.188 }, 00:05:25.188 { 00:05:25.188 "aliases": [ 00:05:25.188 "1d75df8b-3975-53f4-a535-1fc5275e6e34" 00:05:25.188 ], 00:05:25.188 "assigned_rate_limits": { 00:05:25.188 "r_mbytes_per_sec": 0, 00:05:25.188 "rw_ios_per_sec": 0, 00:05:25.188 "rw_mbytes_per_sec": 0, 00:05:25.188 "w_mbytes_per_sec": 0 00:05:25.188 }, 00:05:25.188 "block_size": 512, 00:05:25.188 "claimed": false, 00:05:25.188 "driver_specific": { 00:05:25.188 "passthru": { 00:05:25.188 "base_bdev_name": "Malloc3", 00:05:25.188 "name": "Passthru0" 00:05:25.188 } 00:05:25.188 }, 00:05:25.188 "memory_domains": [ 00:05:25.188 { 00:05:25.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.188 "dma_device_type": 2 00:05:25.188 } 00:05:25.188 ], 00:05:25.188 "name": "Passthru0", 00:05:25.188 "num_blocks": 16384, 00:05:25.188 "product_name": "passthru", 00:05:25.188 "supported_io_types": { 00:05:25.188 "abort": true, 00:05:25.188 "compare": false, 00:05:25.188 "compare_and_write": false, 00:05:25.188 "flush": true, 00:05:25.188 "nvme_admin": false, 00:05:25.188 "nvme_io": false, 00:05:25.188 "read": true, 00:05:25.188 "reset": true, 00:05:25.188 "unmap": true, 00:05:25.188 "write": true, 00:05:25.188 "write_zeroes": true 00:05:25.188 }, 00:05:25.188 "uuid": "1d75df8b-3975-53f4-a535-1fc5275e6e34", 00:05:25.188 "zoned": false 00:05:25.188 } 00:05:25.188 ]' 00:05:25.188 04:01:26 -- rpc/rpc.sh@21 -- # jq length 00:05:25.188 04:01:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.188 04:01:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.188 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.188 04:01:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:25.188 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.188 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.448 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.448 04:01:26 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.448 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.448 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.448 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.448 04:01:26 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.448 04:01:26 -- rpc/rpc.sh@26 -- # jq length 00:05:25.448 ************************************ 00:05:25.448 END TEST rpc_daemon_integrity 00:05:25.448 ************************************ 00:05:25.448 04:01:27 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.448 00:05:25.448 real 0m0.319s 00:05:25.448 user 0m0.221s 00:05:25.448 sys 0m0.031s 00:05:25.448 04:01:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.448 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.448 04:01:27 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:25.448 04:01:27 -- rpc/rpc.sh@84 -- # killprocess 67572 00:05:25.448 04:01:27 -- common/autotest_common.sh@936 -- # '[' -z 67572 ']' 00:05:25.448 04:01:27 -- common/autotest_common.sh@940 -- # kill -0 67572 00:05:25.448 04:01:27 -- common/autotest_common.sh@941 -- # uname 00:05:25.448 04:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.448 04:01:27 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67572 00:05:25.448 killing process with pid 67572 00:05:25.448 04:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.448 04:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.448 04:01:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67572' 00:05:25.448 04:01:27 -- common/autotest_common.sh@955 -- # kill 67572 00:05:25.448 04:01:27 -- common/autotest_common.sh@960 -- # wait 67572 00:05:25.707 00:05:25.707 real 0m3.201s 00:05:25.707 user 0m4.156s 00:05:25.708 sys 0m0.850s 00:05:25.708 04:01:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.708 ************************************ 00:05:25.708 END TEST rpc 00:05:25.708 ************************************ 00:05:25.708 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.966 04:01:27 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.966 04:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.966 04:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.966 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.966 ************************************ 00:05:25.966 START TEST rpc_client 00:05:25.966 ************************************ 00:05:25.966 04:01:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.966 * Looking for test storage... 00:05:25.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:25.966 04:01:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.966 04:01:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.966 04:01:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.966 04:01:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.966 04:01:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.966 04:01:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.966 04:01:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.966 04:01:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.966 04:01:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.966 04:01:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.966 04:01:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.966 04:01:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.966 04:01:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.966 04:01:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.966 04:01:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.966 04:01:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.966 04:01:27 -- scripts/common.sh@344 -- # : 1 00:05:25.966 04:01:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.966 04:01:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.966 04:01:27 -- scripts/common.sh@364 -- # decimal 1 00:05:25.967 04:01:27 -- scripts/common.sh@352 -- # local d=1 00:05:25.967 04:01:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.967 04:01:27 -- scripts/common.sh@354 -- # echo 1 00:05:25.967 04:01:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.967 04:01:27 -- scripts/common.sh@365 -- # decimal 2 00:05:25.967 04:01:27 -- scripts/common.sh@352 -- # local d=2 00:05:25.967 04:01:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.967 04:01:27 -- scripts/common.sh@354 -- # echo 2 00:05:25.967 04:01:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.967 04:01:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.967 04:01:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.967 04:01:27 -- scripts/common.sh@367 -- # return 0 00:05:25.967 04:01:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.967 04:01:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.967 --rc genhtml_branch_coverage=1 00:05:25.967 --rc genhtml_function_coverage=1 00:05:25.967 --rc genhtml_legend=1 00:05:25.967 --rc geninfo_all_blocks=1 00:05:25.967 --rc geninfo_unexecuted_blocks=1 00:05:25.967 00:05:25.967 ' 00:05:25.967 04:01:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.967 --rc genhtml_branch_coverage=1 00:05:25.967 --rc genhtml_function_coverage=1 00:05:25.967 --rc genhtml_legend=1 00:05:25.967 --rc geninfo_all_blocks=1 00:05:25.967 --rc geninfo_unexecuted_blocks=1 00:05:25.967 00:05:25.967 ' 00:05:25.967 04:01:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.967 --rc genhtml_branch_coverage=1 00:05:25.967 --rc genhtml_function_coverage=1 00:05:25.967 --rc genhtml_legend=1 00:05:25.967 --rc geninfo_all_blocks=1 00:05:25.967 --rc geninfo_unexecuted_blocks=1 00:05:25.967 00:05:25.967 ' 00:05:25.967 04:01:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.967 --rc genhtml_branch_coverage=1 00:05:25.967 --rc genhtml_function_coverage=1 00:05:25.967 --rc genhtml_legend=1 00:05:25.967 --rc geninfo_all_blocks=1 00:05:25.967 --rc geninfo_unexecuted_blocks=1 00:05:25.967 00:05:25.967 ' 00:05:25.967 04:01:27 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:25.967 OK 00:05:25.967 04:01:27 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.967 00:05:25.967 real 0m0.212s 00:05:25.967 user 0m0.134s 00:05:25.967 sys 0m0.088s 00:05:25.967 04:01:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.967 ************************************ 00:05:25.967 END TEST rpc_client 00:05:25.967 ************************************ 00:05:25.967 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 04:01:27 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.226 04:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.226 04:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.226 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 ************************************ 00:05:26.226 START TEST 
json_config 00:05:26.226 ************************************ 00:05:26.226 04:01:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.226 04:01:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.226 04:01:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.226 04:01:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.226 04:01:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.226 04:01:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.226 04:01:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.226 04:01:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.226 04:01:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.226 04:01:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.226 04:01:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.226 04:01:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.226 04:01:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.226 04:01:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.226 04:01:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.226 04:01:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.226 04:01:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.226 04:01:27 -- scripts/common.sh@344 -- # : 1 00:05:26.226 04:01:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.226 04:01:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.226 04:01:27 -- scripts/common.sh@364 -- # decimal 1 00:05:26.226 04:01:27 -- scripts/common.sh@352 -- # local d=1 00:05:26.226 04:01:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.226 04:01:27 -- scripts/common.sh@354 -- # echo 1 00:05:26.226 04:01:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.226 04:01:27 -- scripts/common.sh@365 -- # decimal 2 00:05:26.226 04:01:27 -- scripts/common.sh@352 -- # local d=2 00:05:26.226 04:01:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.226 04:01:27 -- scripts/common.sh@354 -- # echo 2 00:05:26.226 04:01:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.226 04:01:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.226 04:01:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.226 04:01:27 -- scripts/common.sh@367 -- # return 0 00:05:26.226 04:01:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.226 04:01:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.226 --rc genhtml_branch_coverage=1 00:05:26.226 --rc genhtml_function_coverage=1 00:05:26.226 --rc genhtml_legend=1 00:05:26.226 --rc geninfo_all_blocks=1 00:05:26.226 --rc geninfo_unexecuted_blocks=1 00:05:26.226 00:05:26.226 ' 00:05:26.226 04:01:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.226 --rc genhtml_branch_coverage=1 00:05:26.226 --rc genhtml_function_coverage=1 00:05:26.226 --rc genhtml_legend=1 00:05:26.226 --rc geninfo_all_blocks=1 00:05:26.226 --rc geninfo_unexecuted_blocks=1 00:05:26.226 00:05:26.226 ' 00:05:26.226 04:01:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.226 --rc genhtml_branch_coverage=1 00:05:26.226 --rc genhtml_function_coverage=1 00:05:26.226 --rc genhtml_legend=1 00:05:26.226 --rc 
geninfo_all_blocks=1 00:05:26.226 --rc geninfo_unexecuted_blocks=1 00:05:26.226 00:05:26.226 ' 00:05:26.226 04:01:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.226 --rc genhtml_branch_coverage=1 00:05:26.226 --rc genhtml_function_coverage=1 00:05:26.226 --rc genhtml_legend=1 00:05:26.226 --rc geninfo_all_blocks=1 00:05:26.226 --rc geninfo_unexecuted_blocks=1 00:05:26.226 00:05:26.226 ' 00:05:26.226 04:01:27 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.226 04:01:27 -- nvmf/common.sh@7 -- # uname -s 00:05:26.226 04:01:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.226 04:01:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.226 04:01:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.226 04:01:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.226 04:01:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.226 04:01:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.226 04:01:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.226 04:01:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.226 04:01:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.226 04:01:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.226 04:01:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:05:26.226 04:01:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:05:26.226 04:01:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.226 04:01:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.226 04:01:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.226 04:01:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.226 04:01:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.226 04:01:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.227 04:01:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.227 04:01:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.227 04:01:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.227 04:01:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.227 
04:01:27 -- paths/export.sh@5 -- # export PATH 00:05:26.227 04:01:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.227 04:01:27 -- nvmf/common.sh@46 -- # : 0 00:05:26.227 04:01:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:26.227 04:01:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:26.227 04:01:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:26.227 04:01:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.227 04:01:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.227 04:01:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:26.227 04:01:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:26.227 04:01:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:26.227 04:01:27 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.227 04:01:27 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.227 04:01:27 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:26.227 04:01:27 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.227 04:01:27 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:26.227 04:01:27 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.227 04:01:27 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:26.227 04:01:27 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:26.227 04:01:27 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:26.227 04:01:27 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:26.227 04:01:27 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.227 04:01:27 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:26.227 INFO: JSON configuration test init 00:05:26.227 04:01:27 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:26.227 04:01:27 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:26.227 04:01:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.227 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 04:01:27 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:26.227 04:01:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.227 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 04:01:27 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.227 04:01:27 -- json_config/json_config.sh@98 -- # local app=target 00:05:26.227 
04:01:27 -- json_config/json_config.sh@99 -- # shift 00:05:26.227 04:01:27 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:26.227 04:01:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:26.227 04:01:27 -- json_config/json_config.sh@111 -- # app_pid[$app]=67900 00:05:26.227 Waiting for target to run... 00:05:26.227 04:01:27 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:26.227 04:01:27 -- json_config/json_config.sh@114 -- # waitforlisten 67900 /var/tmp/spdk_tgt.sock 00:05:26.227 04:01:27 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.227 04:01:27 -- common/autotest_common.sh@829 -- # '[' -z 67900 ']' 00:05:26.227 04:01:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.227 04:01:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.227 04:01:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.227 04:01:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.227 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.486 [2024-11-26 04:01:28.037822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.486 [2024-11-26 04:01:28.038085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67900 ] 00:05:27.052 [2024-11-26 04:01:28.594912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.053 [2024-11-26 04:01:28.673857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.053 [2024-11-26 04:01:28.674083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.311 00:05:27.311 04:01:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.311 04:01:28 -- common/autotest_common.sh@862 -- # return 0 00:05:27.311 04:01:28 -- json_config/json_config.sh@115 -- # echo '' 00:05:27.311 04:01:28 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:27.311 04:01:28 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:27.311 04:01:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.311 04:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:27.311 04:01:28 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:27.311 04:01:28 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:27.311 04:01:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.311 04:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:27.311 04:01:29 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.311 04:01:29 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:27.311 04:01:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:27.877 04:01:29 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:27.877 04:01:29 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:27.877 04:01:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.877 04:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:27.877 04:01:29 -- json_config/json_config.sh@48 -- # local ret=0 00:05:27.877 04:01:29 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:27.877 04:01:29 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:27.877 04:01:29 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:27.877 04:01:29 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:27.877 04:01:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.136 04:01:29 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.136 04:01:29 -- json_config/json_config.sh@51 -- # local get_types 00:05:28.136 04:01:29 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:28.136 04:01:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.136 04:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.136 04:01:29 -- json_config/json_config.sh@58 -- # return 0 00:05:28.136 04:01:29 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:28.136 04:01:29 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:28.136 04:01:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.136 04:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.136 04:01:29 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.136 04:01:29 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:28.136 04:01:29 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.136 04:01:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.394 MallocForNvmf0 00:05:28.394 04:01:30 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.394 04:01:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.653 MallocForNvmf1 00:05:28.653 04:01:30 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.653 04:01:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.912 [2024-11-26 04:01:30.535083] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.912 04:01:30 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.912 04:01:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.171 04:01:30 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.171 04:01:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.430 04:01:30 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.430 04:01:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.689 04:01:31 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.689 04:01:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.948 [2024-11-26 04:01:31.523549] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.948 04:01:31 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:29.948 04:01:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.948 04:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.948 04:01:31 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:29.948 04:01:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.948 04:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.948 04:01:31 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:29.948 04:01:31 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.948 04:01:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.207 MallocBdevForConfigChangeCheck 00:05:30.207 04:01:31 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:30.207 04:01:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.207 04:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.207 04:01:31 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:30.207 04:01:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.773 INFO: shutting down applications... 00:05:30.773 04:01:32 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
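The shutdown sequence traced above first snapshots the live configuration, then clears every subsystem through the clear_config helper before stopping the target. A minimal sketch of that save-and-clear cycle, assuming the repo-relative script paths and the /var/tmp/spdk_tgt.sock socket shown in this log (run from the SPDK repo root):

    # Snapshot the running target's configuration as JSON.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live_config.json
    # Tear down all configured subsystems (nvmf, bdev, accel, ...).
    ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # After clearing, a fresh dump should hold nothing beyond global parameters.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method delete_global_parameters \
        | ./test/json_config/config_filter.py -method check_empty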
00:05:30.773 04:01:32 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:30.773 04:01:32 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:30.773 04:01:32 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:30.773 04:01:32 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:31.032 Calling clear_iscsi_subsystem 00:05:31.032 Calling clear_nvmf_subsystem 00:05:31.032 Calling clear_nbd_subsystem 00:05:31.032 Calling clear_ublk_subsystem 00:05:31.032 Calling clear_vhost_blk_subsystem 00:05:31.032 Calling clear_vhost_scsi_subsystem 00:05:31.032 Calling clear_scheduler_subsystem 00:05:31.032 Calling clear_bdev_subsystem 00:05:31.032 Calling clear_accel_subsystem 00:05:31.032 Calling clear_vmd_subsystem 00:05:31.032 Calling clear_sock_subsystem 00:05:31.032 Calling clear_iobuf_subsystem 00:05:31.032 04:01:32 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:31.032 04:01:32 -- json_config/json_config.sh@396 -- # count=100 00:05:31.032 04:01:32 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:31.032 04:01:32 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.032 04:01:32 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:31.032 04:01:32 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:31.291 04:01:33 -- json_config/json_config.sh@398 -- # break 00:05:31.291 04:01:33 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:31.291 04:01:33 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:31.291 04:01:33 -- json_config/json_config.sh@120 -- # local app=target 00:05:31.291 04:01:33 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:31.291 04:01:33 -- json_config/json_config.sh@124 -- # [[ -n 67900 ]] 00:05:31.291 04:01:33 -- json_config/json_config.sh@127 -- # kill -SIGINT 67900 00:05:31.291 04:01:33 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:31.291 04:01:33 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:31.291 04:01:33 -- json_config/json_config.sh@130 -- # kill -0 67900 00:05:31.291 04:01:33 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:31.858 04:01:33 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:31.858 04:01:33 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:31.858 04:01:33 -- json_config/json_config.sh@130 -- # kill -0 67900 00:05:31.858 04:01:33 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:31.858 04:01:33 -- json_config/json_config.sh@132 -- # break 00:05:31.858 04:01:33 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:31.858 04:01:33 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:31.858 SPDK target shutdown done 00:05:31.858 INFO: relaunching applications... 00:05:31.858 04:01:33 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
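Relaunching is then just a matter of starting spdk_tgt again and pointing it at the JSON it saved, so the whole configuration is replayed at boot instead of being rebuilt RPC by RPC. A sketch using the flags visible in this run (the readiness poll via rpc_get_methods is an assumption here, not the waitforlisten helper the test itself uses):

    # Restart the target on core 0 with 1024 MiB of memory, replaying the saved config.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    # Block until the RPC socket is answering before running any checks.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 30 rpc_get_methods > /dev/null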
00:05:31.858 04:01:33 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.858 04:01:33 -- json_config/json_config.sh@98 -- # local app=target 00:05:31.858 04:01:33 -- json_config/json_config.sh@99 -- # shift 00:05:31.858 04:01:33 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:31.858 04:01:33 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:31.858 04:01:33 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:31.858 04:01:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.858 04:01:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.858 04:01:33 -- json_config/json_config.sh@111 -- # app_pid[$app]=68169 00:05:31.858 04:01:33 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.858 Waiting for target to run... 00:05:31.858 04:01:33 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:31.858 04:01:33 -- json_config/json_config.sh@114 -- # waitforlisten 68169 /var/tmp/spdk_tgt.sock 00:05:31.858 04:01:33 -- common/autotest_common.sh@829 -- # '[' -z 68169 ']' 00:05:31.858 04:01:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.858 04:01:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.858 04:01:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.858 04:01:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.858 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:31.858 [2024-11-26 04:01:33.605350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.858 [2024-11-26 04:01:33.605442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68169 ] 00:05:32.426 [2024-11-26 04:01:34.006747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.426 [2024-11-26 04:01:34.058111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.426 [2024-11-26 04:01:34.058275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.684 [2024-11-26 04:01:34.353296] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.684 [2024-11-26 04:01:34.385382] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.943 04:01:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.943 04:01:34 -- common/autotest_common.sh@862 -- # return 0 00:05:32.943 00:05:32.943 04:01:34 -- json_config/json_config.sh@115 -- # echo '' 00:05:32.943 04:01:34 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:32.943 INFO: Checking if target configuration is the same... 00:05:32.943 04:01:34 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
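The "configuration is the same" check above is a plain textual comparison: json_diff.sh canonicalizes both the reference file and a fresh save_config dump with config_filter.py, then runs diff -u. A sketch of the same comparison done by hand, with the temp-file names chosen here only for illustration:

    # Sort both configurations so key and array ordering cannot cause false diffs.
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > /tmp/expected.json
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/actual.json
    # Exit status 0 here means the relaunched target still matches the saved configuration.
    diff -u /tmp/expected.json /tmp/actual.json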
00:05:32.943 04:01:34 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.943 04:01:34 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:32.943 04:01:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.943 + '[' 2 -ne 2 ']' 00:05:32.943 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:32.943 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:32.943 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:32.943 +++ basename /dev/fd/62 00:05:32.943 ++ mktemp /tmp/62.XXX 00:05:32.943 + tmp_file_1=/tmp/62.XIs 00:05:32.943 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.943 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.943 + tmp_file_2=/tmp/spdk_tgt_config.json.vuO 00:05:32.943 + ret=0 00:05:32.943 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:33.203 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:33.461 + diff -u /tmp/62.XIs /tmp/spdk_tgt_config.json.vuO 00:05:33.461 + echo 'INFO: JSON config files are the same' 00:05:33.461 INFO: JSON config files are the same 00:05:33.461 + rm /tmp/62.XIs /tmp/spdk_tgt_config.json.vuO 00:05:33.461 + exit 0 00:05:33.461 04:01:34 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:33.461 INFO: changing configuration and checking if this can be detected... 00:05:33.461 04:01:34 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:33.461 04:01:34 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.461 04:01:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.723 04:01:35 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:33.723 04:01:35 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.723 04:01:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.723 + '[' 2 -ne 2 ']' 00:05:33.723 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:33.723 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:33.723 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:33.723 +++ basename /dev/fd/62 00:05:33.723 ++ mktemp /tmp/62.XXX 00:05:33.723 + tmp_file_1=/tmp/62.XWa 00:05:33.723 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.723 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.723 + tmp_file_2=/tmp/spdk_tgt_config.json.vKp 00:05:33.723 + ret=0 00:05:33.723 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:33.982 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:33.982 + diff -u /tmp/62.XWa /tmp/spdk_tgt_config.json.vKp 00:05:33.982 + ret=1 00:05:33.982 + echo '=== Start of file: /tmp/62.XWa ===' 00:05:33.982 + cat /tmp/62.XWa 00:05:33.982 + echo '=== End of file: /tmp/62.XWa ===' 00:05:33.982 + echo '' 00:05:33.982 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vKp ===' 00:05:33.982 + cat /tmp/spdk_tgt_config.json.vKp 00:05:33.982 + echo '=== End of file: /tmp/spdk_tgt_config.json.vKp ===' 00:05:33.982 + echo '' 00:05:33.982 + rm /tmp/62.XWa /tmp/spdk_tgt_config.json.vKp 00:05:33.982 + exit 1 00:05:33.982 INFO: configuration change detected. 00:05:33.982 04:01:35 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:33.982 04:01:35 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:33.982 04:01:35 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:33.982 04:01:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.982 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 04:01:35 -- json_config/json_config.sh@360 -- # local ret=0 00:05:33.982 04:01:35 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:33.982 04:01:35 -- json_config/json_config.sh@370 -- # [[ -n 68169 ]] 00:05:33.982 04:01:35 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:33.982 04:01:35 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:33.982 04:01:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.982 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 04:01:35 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:33.982 04:01:35 -- json_config/json_config.sh@246 -- # uname -s 00:05:33.982 04:01:35 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:33.982 04:01:35 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:33.982 04:01:35 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:33.982 04:01:35 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:33.982 04:01:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.982 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 04:01:35 -- json_config/json_config.sh@376 -- # killprocess 68169 00:05:34.241 04:01:35 -- common/autotest_common.sh@936 -- # '[' -z 68169 ']' 00:05:34.241 04:01:35 -- common/autotest_common.sh@940 -- # kill -0 68169 00:05:34.241 04:01:35 -- common/autotest_common.sh@941 -- # uname 00:05:34.241 04:01:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.241 04:01:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68169 00:05:34.241 04:01:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.241 04:01:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.241 killing process with pid 68169 00:05:34.241 04:01:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68169' 00:05:34.241 
04:01:35 -- common/autotest_common.sh@955 -- # kill 68169 00:05:34.241 04:01:35 -- common/autotest_common.sh@960 -- # wait 68169 00:05:34.241 04:01:35 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.241 04:01:35 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:34.241 04:01:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.241 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.500 04:01:36 -- json_config/json_config.sh@381 -- # return 0 00:05:34.500 04:01:36 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:34.500 INFO: Success 00:05:34.500 00:05:34.500 real 0m8.266s 00:05:34.500 user 0m11.548s 00:05:34.500 sys 0m1.947s 00:05:34.500 04:01:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.500 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.500 ************************************ 00:05:34.500 END TEST json_config 00:05:34.500 ************************************ 00:05:34.500 04:01:36 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.500 04:01:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.500 04:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.500 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.500 ************************************ 00:05:34.500 START TEST json_config_extra_key 00:05:34.500 ************************************ 00:05:34.501 04:01:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.501 04:01:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.501 04:01:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.501 04:01:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.501 04:01:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.501 04:01:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.501 04:01:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.501 04:01:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.501 04:01:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.501 04:01:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.501 04:01:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.501 04:01:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.501 04:01:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.501 04:01:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.501 04:01:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.501 04:01:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.501 04:01:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.501 04:01:36 -- scripts/common.sh@344 -- # : 1 00:05:34.501 04:01:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.501 04:01:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.501 04:01:36 -- scripts/common.sh@364 -- # decimal 1 00:05:34.501 04:01:36 -- scripts/common.sh@352 -- # local d=1 00:05:34.501 04:01:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.501 04:01:36 -- scripts/common.sh@354 -- # echo 1 00:05:34.501 04:01:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.501 04:01:36 -- scripts/common.sh@365 -- # decimal 2 00:05:34.501 04:01:36 -- scripts/common.sh@352 -- # local d=2 00:05:34.501 04:01:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.501 04:01:36 -- scripts/common.sh@354 -- # echo 2 00:05:34.501 04:01:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.501 04:01:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.501 04:01:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.501 04:01:36 -- scripts/common.sh@367 -- # return 0 00:05:34.501 04:01:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.501 04:01:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 04:01:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 04:01:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 04:01:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.501 --rc genhtml_branch_coverage=1 00:05:34.501 --rc genhtml_function_coverage=1 00:05:34.501 --rc genhtml_legend=1 00:05:34.501 --rc geninfo_all_blocks=1 00:05:34.501 --rc geninfo_unexecuted_blocks=1 00:05:34.501 00:05:34.501 ' 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.501 04:01:36 -- nvmf/common.sh@7 -- # uname -s 00:05:34.501 04:01:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.501 04:01:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.501 04:01:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.501 04:01:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.501 04:01:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.501 04:01:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.501 04:01:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.501 04:01:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.501 04:01:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.501 04:01:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.501 04:01:36 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:05:34.501 04:01:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:05:34.501 04:01:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.501 04:01:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.501 04:01:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.501 04:01:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.501 04:01:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.501 04:01:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.501 04:01:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.501 04:01:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.501 04:01:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.501 04:01:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.501 04:01:36 -- paths/export.sh@5 -- # export PATH 00:05:34.501 04:01:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.501 04:01:36 -- nvmf/common.sh@46 -- # : 0 00:05:34.501 04:01:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:34.501 04:01:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:34.501 04:01:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:34.501 04:01:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.501 04:01:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.501 04:01:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:34.501 04:01:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:34.501 04:01:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.501 INFO: launching applications... 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68352 00:05:34.501 Waiting for target to run... 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68352 /var/tmp/spdk_tgt.sock 00:05:34.501 04:01:36 -- common/autotest_common.sh@829 -- # '[' -z 68352 ']' 00:05:34.501 04:01:36 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.501 04:01:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.501 04:01:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.501 04:01:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.501 04:01:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.501 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.760 [2024-11-26 04:01:36.302191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
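For context, the extra-key variant launches the target directly from test/json_config/extra_key.json and only checks that it comes up and later shuts down cleanly. The launch-and-wait pattern being traced here, reduced to a manual sketch (flags copied from this run; the polling loop below is a simplified stand-in for waitforlisten, which in the real harness also checks the PID and caps its retries at 100):

  # Sketch only: start spdk_tgt from a JSON config and wait until its RPC socket answers.
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # ... exercise the target here ...
  kill -SIGINT "$tgt_pid"
  wait "$tgt_pid"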
00:05:34.760 [2024-11-26 04:01:36.302303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68352 ] 00:05:35.019 [2024-11-26 04:01:36.746317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.278 [2024-11-26 04:01:36.796586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.278 [2024-11-26 04:01:36.796757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.537 04:01:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.537 04:01:37 -- common/autotest_common.sh@862 -- # return 0 00:05:35.537 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:35.537 INFO: shutting down applications... 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68352 ]] 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68352 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68352 00:05:35.537 04:01:37 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68352 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:36.104 SPDK target shutdown done 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:36.104 Success 00:05:36.104 04:01:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:36.104 00:05:36.104 real 0m1.653s 00:05:36.104 user 0m1.447s 00:05:36.104 sys 0m0.453s 00:05:36.104 04:01:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.104 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:36.104 ************************************ 00:05:36.104 END TEST json_config_extra_key 00:05:36.104 ************************************ 00:05:36.104 04:01:37 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.104 04:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.104 04:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.104 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:36.104 ************************************ 00:05:36.104 START TEST alias_rpc 00:05:36.104 ************************************ 00:05:36.104 04:01:37 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.104 * Looking for test storage... 00:05:36.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:36.363 04:01:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.363 04:01:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.363 04:01:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.363 04:01:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.363 04:01:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.363 04:01:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.363 04:01:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.363 04:01:37 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.363 04:01:37 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.363 04:01:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.363 04:01:37 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.363 04:01:37 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.363 04:01:37 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.363 04:01:37 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.363 04:01:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.363 04:01:37 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.363 04:01:37 -- scripts/common.sh@344 -- # : 1 00:05:36.363 04:01:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.363 04:01:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.363 04:01:37 -- scripts/common.sh@364 -- # decimal 1 00:05:36.363 04:01:37 -- scripts/common.sh@352 -- # local d=1 00:05:36.363 04:01:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.363 04:01:37 -- scripts/common.sh@354 -- # echo 1 00:05:36.363 04:01:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.363 04:01:37 -- scripts/common.sh@365 -- # decimal 2 00:05:36.363 04:01:37 -- scripts/common.sh@352 -- # local d=2 00:05:36.363 04:01:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.363 04:01:37 -- scripts/common.sh@354 -- # echo 2 00:05:36.363 04:01:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.363 04:01:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.363 04:01:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.363 04:01:37 -- scripts/common.sh@367 -- # return 0 00:05:36.363 04:01:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.363 04:01:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.363 --rc genhtml_branch_coverage=1 00:05:36.363 --rc genhtml_function_coverage=1 00:05:36.363 --rc genhtml_legend=1 00:05:36.363 --rc geninfo_all_blocks=1 00:05:36.363 --rc geninfo_unexecuted_blocks=1 00:05:36.363 00:05:36.363 ' 00:05:36.363 04:01:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.363 --rc genhtml_branch_coverage=1 00:05:36.363 --rc genhtml_function_coverage=1 00:05:36.363 --rc genhtml_legend=1 00:05:36.363 --rc geninfo_all_blocks=1 00:05:36.363 --rc geninfo_unexecuted_blocks=1 00:05:36.363 00:05:36.363 ' 00:05:36.363 04:01:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.363 --rc genhtml_branch_coverage=1 00:05:36.363 --rc genhtml_function_coverage=1 00:05:36.363 --rc genhtml_legend=1 
00:05:36.363 --rc geninfo_all_blocks=1 00:05:36.363 --rc geninfo_unexecuted_blocks=1 00:05:36.363 00:05:36.363 ' 00:05:36.363 04:01:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.364 --rc genhtml_branch_coverage=1 00:05:36.364 --rc genhtml_function_coverage=1 00:05:36.364 --rc genhtml_legend=1 00:05:36.364 --rc geninfo_all_blocks=1 00:05:36.364 --rc geninfo_unexecuted_blocks=1 00:05:36.364 00:05:36.364 ' 00:05:36.364 04:01:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.364 04:01:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68430 00:05:36.364 04:01:37 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.364 04:01:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68430 00:05:36.364 04:01:37 -- common/autotest_common.sh@829 -- # '[' -z 68430 ']' 00:05:36.364 04:01:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.364 04:01:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.364 04:01:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.364 04:01:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.364 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:36.364 [2024-11-26 04:01:38.049506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.364 [2024-11-26 04:01:38.049834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68430 ] 00:05:36.622 [2024-11-26 04:01:38.187847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.622 [2024-11-26 04:01:38.241586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.622 [2024-11-26 04:01:38.242079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.557 04:01:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.557 04:01:39 -- common/autotest_common.sh@862 -- # return 0 00:05:37.557 04:01:39 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:37.815 04:01:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68430 00:05:37.815 04:01:39 -- common/autotest_common.sh@936 -- # '[' -z 68430 ']' 00:05:37.815 04:01:39 -- common/autotest_common.sh@940 -- # kill -0 68430 00:05:37.815 04:01:39 -- common/autotest_common.sh@941 -- # uname 00:05:37.815 04:01:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.815 04:01:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68430 00:05:37.815 killing process with pid 68430 00:05:37.815 04:01:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.815 04:01:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.815 04:01:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68430' 00:05:37.815 04:01:39 -- common/autotest_common.sh@955 -- # kill 68430 00:05:37.815 04:01:39 -- common/autotest_common.sh@960 -- # wait 68430 00:05:38.073 ************************************ 00:05:38.073 END TEST alias_rpc 00:05:38.073 
************************************ 00:05:38.073 00:05:38.073 real 0m1.930s 00:05:38.073 user 0m2.202s 00:05:38.073 sys 0m0.478s 00:05:38.073 04:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.073 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.073 04:01:39 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:38.073 04:01:39 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.073 04:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.073 04:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.073 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.073 ************************************ 00:05:38.073 START TEST dpdk_mem_utility 00:05:38.073 ************************************ 00:05:38.073 04:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.333 * Looking for test storage... 00:05:38.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:38.333 04:01:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.333 04:01:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.333 04:01:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.333 04:01:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.333 04:01:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.333 04:01:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.333 04:01:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.333 04:01:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.333 04:01:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.333 04:01:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.333 04:01:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.333 04:01:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.333 04:01:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.333 04:01:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.333 04:01:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.333 04:01:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.333 04:01:39 -- scripts/common.sh@344 -- # : 1 00:05:38.333 04:01:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.333 04:01:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.333 04:01:39 -- scripts/common.sh@364 -- # decimal 1 00:05:38.333 04:01:39 -- scripts/common.sh@352 -- # local d=1 00:05:38.333 04:01:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.333 04:01:39 -- scripts/common.sh@354 -- # echo 1 00:05:38.333 04:01:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.333 04:01:39 -- scripts/common.sh@365 -- # decimal 2 00:05:38.333 04:01:39 -- scripts/common.sh@352 -- # local d=2 00:05:38.333 04:01:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.333 04:01:39 -- scripts/common.sh@354 -- # echo 2 00:05:38.333 04:01:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.333 04:01:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.333 04:01:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.333 04:01:39 -- scripts/common.sh@367 -- # return 0 00:05:38.333 04:01:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.333 04:01:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.333 --rc genhtml_branch_coverage=1 00:05:38.333 --rc genhtml_function_coverage=1 00:05:38.333 --rc genhtml_legend=1 00:05:38.333 --rc geninfo_all_blocks=1 00:05:38.333 --rc geninfo_unexecuted_blocks=1 00:05:38.333 00:05:38.333 ' 00:05:38.333 04:01:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.333 --rc genhtml_branch_coverage=1 00:05:38.333 --rc genhtml_function_coverage=1 00:05:38.333 --rc genhtml_legend=1 00:05:38.333 --rc geninfo_all_blocks=1 00:05:38.333 --rc geninfo_unexecuted_blocks=1 00:05:38.333 00:05:38.333 ' 00:05:38.333 04:01:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.333 --rc genhtml_branch_coverage=1 00:05:38.333 --rc genhtml_function_coverage=1 00:05:38.333 --rc genhtml_legend=1 00:05:38.333 --rc geninfo_all_blocks=1 00:05:38.333 --rc geninfo_unexecuted_blocks=1 00:05:38.333 00:05:38.333 ' 00:05:38.333 04:01:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.333 --rc genhtml_branch_coverage=1 00:05:38.333 --rc genhtml_function_coverage=1 00:05:38.333 --rc genhtml_legend=1 00:05:38.333 --rc geninfo_all_blocks=1 00:05:38.333 --rc geninfo_unexecuted_blocks=1 00:05:38.333 00:05:38.333 ' 00:05:38.333 04:01:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.333 04:01:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68529 00:05:38.333 04:01:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.333 04:01:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68529 00:05:38.333 04:01:39 -- common/autotest_common.sh@829 -- # '[' -z 68529 ']' 00:05:38.333 04:01:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.333 04:01:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.333 04:01:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
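For reference, the memory-utility check that follows drives two pieces of tooling that also work outside the test harness: the env_dpdk_get_mem_stats RPC, which makes the target write a dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump. A minimal manual equivalent of what the test does, assuming a target is already listening on the default /var/tmp/spdk.sock socket:

  # Sketch only: ask a running target for a DPDK memory dump, then summarize it.
  ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py        # heap / mempool / memzone totals
  ./scripts/dpdk_mem_info.py -m 0   # per-element detail for heap 0, as in the dump below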
00:05:38.333 04:01:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.333 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.333 [2024-11-26 04:01:40.033589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.333 [2024-11-26 04:01:40.033695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68529 ] 00:05:38.591 [2024-11-26 04:01:40.172976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.591 [2024-11-26 04:01:40.227273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.591 [2024-11-26 04:01:40.227433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.528 04:01:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.528 04:01:41 -- common/autotest_common.sh@862 -- # return 0 00:05:39.528 04:01:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.528 04:01:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.528 04:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.528 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.528 { 00:05:39.528 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.528 } 00:05:39.528 04:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.528 04:01:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:39.528 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:39.528 1 heaps totaling size 814.000000 MiB 00:05:39.528 size: 814.000000 MiB heap id: 0 00:05:39.528 end heaps---------- 00:05:39.528 8 mempools totaling size 598.116089 MiB 00:05:39.528 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.528 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.528 size: 84.521057 MiB name: bdev_io_68529 00:05:39.528 size: 51.011292 MiB name: evtpool_68529 00:05:39.528 size: 50.003479 MiB name: msgpool_68529 00:05:39.528 size: 21.763794 MiB name: PDU_Pool 00:05:39.528 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.528 size: 0.026123 MiB name: Session_Pool 00:05:39.528 end mempools------- 00:05:39.528 6 memzones totaling size 4.142822 MiB 00:05:39.528 size: 1.000366 MiB name: RG_ring_0_68529 00:05:39.528 size: 1.000366 MiB name: RG_ring_1_68529 00:05:39.528 size: 1.000366 MiB name: RG_ring_4_68529 00:05:39.528 size: 1.000366 MiB name: RG_ring_5_68529 00:05:39.528 size: 0.125366 MiB name: RG_ring_2_68529 00:05:39.528 size: 0.015991 MiB name: RG_ring_3_68529 00:05:39.528 end memzones------- 00:05:39.529 04:01:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.529 heap id: 0 total size: 814.000000 MiB number of busy elements: 215 number of free elements: 15 00:05:39.529 list of free elements. 
size: 12.487488 MiB 00:05:39.529 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:39.529 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:39.529 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:39.529 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:39.529 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:39.529 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:39.529 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:39.529 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:39.529 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:39.529 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:39.529 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:39.529 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:39.529 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:39.529 element at address: 0x200027e00000 with size: 0.398132 MiB 00:05:39.529 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:39.529 list of standard malloc elements. size: 199.249939 MiB 00:05:39.529 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:39.529 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:39.529 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:39.529 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:39.529 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:39.529 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:39.529 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:39.529 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:39.529 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:39.529 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:39.529 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:39.529 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:39.529 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:39.530 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e580 with size: 0.000183 MiB 
00:05:39.530 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:39.530 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:39.530 list of memzone associated elements. 
size: 602.262573 MiB 00:05:39.530 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:39.530 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.530 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:39.530 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.530 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:39.530 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68529_0 00:05:39.530 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:39.530 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68529_0 00:05:39.530 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:39.530 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68529_0 00:05:39.530 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:39.530 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.530 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:39.530 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.530 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:39.530 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68529 00:05:39.530 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:39.530 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68529 00:05:39.530 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:39.530 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68529 00:05:39.530 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:39.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.530 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:39.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.530 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:39.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.530 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:39.530 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.530 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:39.530 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68529 00:05:39.530 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:39.530 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68529 00:05:39.530 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:39.530 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68529 00:05:39.530 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:39.530 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68529 00:05:39.530 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:39.530 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68529 00:05:39.530 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:39.530 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.530 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:39.530 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.530 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:39.530 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.530 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:39.530 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68529 00:05:39.530 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:39.530 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.530 element at address: 0x200027e66040 with size: 0.023743 MiB 00:05:39.530 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.530 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:39.530 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68529 00:05:39.530 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:05:39.530 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.530 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:39.530 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68529 00:05:39.530 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:39.530 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68529 00:05:39.530 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:05:39.530 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.530 04:01:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.530 04:01:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68529 00:05:39.530 04:01:41 -- common/autotest_common.sh@936 -- # '[' -z 68529 ']' 00:05:39.530 04:01:41 -- common/autotest_common.sh@940 -- # kill -0 68529 00:05:39.530 04:01:41 -- common/autotest_common.sh@941 -- # uname 00:05:39.530 04:01:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.530 04:01:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68529 00:05:39.530 killing process with pid 68529 00:05:39.531 04:01:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.531 04:01:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.531 04:01:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68529' 00:05:39.531 04:01:41 -- common/autotest_common.sh@955 -- # kill 68529 00:05:39.531 04:01:41 -- common/autotest_common.sh@960 -- # wait 68529 00:05:40.098 00:05:40.098 real 0m1.805s 00:05:40.098 user 0m1.960s 00:05:40.098 sys 0m0.470s 00:05:40.098 04:01:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.098 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 ************************************ 00:05:40.098 END TEST dpdk_mem_utility 00:05:40.098 ************************************ 00:05:40.098 04:01:41 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:40.098 04:01:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.098 04:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.098 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 ************************************ 00:05:40.098 START TEST event 00:05:40.098 ************************************ 00:05:40.098 04:01:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:40.098 * Looking for test storage... 
00:05:40.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.098 04:01:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.098 04:01:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.098 04:01:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.098 04:01:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.098 04:01:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.098 04:01:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.098 04:01:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.098 04:01:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.098 04:01:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.098 04:01:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.098 04:01:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.098 04:01:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.098 04:01:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.098 04:01:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.098 04:01:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.098 04:01:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.098 04:01:41 -- scripts/common.sh@344 -- # : 1 00:05:40.098 04:01:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.098 04:01:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.098 04:01:41 -- scripts/common.sh@364 -- # decimal 1 00:05:40.098 04:01:41 -- scripts/common.sh@352 -- # local d=1 00:05:40.098 04:01:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.098 04:01:41 -- scripts/common.sh@354 -- # echo 1 00:05:40.098 04:01:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.098 04:01:41 -- scripts/common.sh@365 -- # decimal 2 00:05:40.098 04:01:41 -- scripts/common.sh@352 -- # local d=2 00:05:40.098 04:01:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.098 04:01:41 -- scripts/common.sh@354 -- # echo 2 00:05:40.098 04:01:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.098 04:01:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.098 04:01:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.098 04:01:41 -- scripts/common.sh@367 -- # return 0 00:05:40.098 04:01:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.099 04:01:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.099 --rc genhtml_branch_coverage=1 00:05:40.099 --rc genhtml_function_coverage=1 00:05:40.099 --rc genhtml_legend=1 00:05:40.099 --rc geninfo_all_blocks=1 00:05:40.099 --rc geninfo_unexecuted_blocks=1 00:05:40.099 00:05:40.099 ' 00:05:40.099 04:01:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.099 --rc genhtml_branch_coverage=1 00:05:40.099 --rc genhtml_function_coverage=1 00:05:40.099 --rc genhtml_legend=1 00:05:40.099 --rc geninfo_all_blocks=1 00:05:40.099 --rc geninfo_unexecuted_blocks=1 00:05:40.099 00:05:40.099 ' 00:05:40.099 04:01:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.099 --rc genhtml_branch_coverage=1 00:05:40.099 --rc genhtml_function_coverage=1 00:05:40.099 --rc genhtml_legend=1 00:05:40.099 --rc geninfo_all_blocks=1 00:05:40.099 --rc geninfo_unexecuted_blocks=1 00:05:40.099 00:05:40.099 ' 00:05:40.099 04:01:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.099 --rc genhtml_branch_coverage=1 00:05:40.099 --rc genhtml_function_coverage=1 00:05:40.099 --rc genhtml_legend=1 00:05:40.099 --rc geninfo_all_blocks=1 00:05:40.099 --rc geninfo_unexecuted_blocks=1 00:05:40.099 00:05:40.099 ' 00:05:40.099 04:01:41 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:40.099 04:01:41 -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.099 04:01:41 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.099 04:01:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:40.099 04:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.099 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 ************************************ 00:05:40.099 START TEST event_perf 00:05:40.099 ************************************ 00:05:40.099 04:01:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.099 Running I/O for 1 seconds...[2024-11-26 04:01:41.857330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.099 [2024-11-26 04:01:41.857563] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68631 ] 00:05:40.357 [2024-11-26 04:01:41.995324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.357 [2024-11-26 04:01:42.051373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.357 [2024-11-26 04:01:42.051519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.357 [2024-11-26 04:01:42.051653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.357 [2024-11-26 04:01:42.051654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.733 Running I/O for 1 seconds... 00:05:41.733 lcore 0: 143204 00:05:41.733 lcore 1: 143204 00:05:41.733 lcore 2: 143205 00:05:41.733 lcore 3: 143204 00:05:41.733 done. 00:05:41.733 ************************************ 00:05:41.733 END TEST event_perf 00:05:41.733 ************************************ 00:05:41.733 00:05:41.733 real 0m1.292s 00:05:41.733 user 0m4.107s 00:05:41.733 sys 0m0.067s 00:05:41.733 04:01:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.733 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.733 04:01:43 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:41.733 04:01:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:41.733 04:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.733 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.733 ************************************ 00:05:41.733 START TEST event_reactor 00:05:41.733 ************************************ 00:05:41.733 04:01:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:41.733 [2024-11-26 04:01:43.207008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:41.733 [2024-11-26 04:01:43.207291] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68664 ] 00:05:41.733 [2024-11-26 04:01:43.339207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.733 [2024-11-26 04:01:43.408177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.109 test_start 00:05:43.109 oneshot 00:05:43.109 tick 100 00:05:43.109 tick 100 00:05:43.109 tick 250 00:05:43.109 tick 100 00:05:43.109 tick 100 00:05:43.109 tick 250 00:05:43.109 tick 500 00:05:43.109 tick 100 00:05:43.109 tick 100 00:05:43.109 tick 100 00:05:43.109 tick 250 00:05:43.109 tick 100 00:05:43.109 tick 100 00:05:43.109 test_end 00:05:43.109 00:05:43.109 real 0m1.287s 00:05:43.109 user 0m1.127s 00:05:43.109 sys 0m0.054s 00:05:43.109 ************************************ 00:05:43.109 END TEST event_reactor 00:05:43.109 ************************************ 00:05:43.109 04:01:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.109 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.109 04:01:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.109 04:01:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:43.109 04:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.109 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.109 ************************************ 00:05:43.109 START TEST event_reactor_perf 00:05:43.109 ************************************ 00:05:43.109 04:01:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.109 [2024-11-26 04:01:44.548450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:43.109 [2024-11-26 04:01:44.548540] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68705 ] 00:05:43.109 [2024-11-26 04:01:44.683779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.109 [2024-11-26 04:01:44.750057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.487 test_start 00:05:44.487 test_end 00:05:44.487 Performance: 468151 events per second 00:05:44.487 00:05:44.487 real 0m1.289s 00:05:44.487 user 0m1.125s 00:05:44.487 sys 0m0.059s 00:05:44.487 04:01:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.487 ************************************ 00:05:44.487 END TEST event_reactor_perf 00:05:44.487 ************************************ 00:05:44.487 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.487 04:01:45 -- event/event.sh@49 -- # uname -s 00:05:44.487 04:01:45 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.487 04:01:45 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:44.487 04:01:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.487 04:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.487 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.487 ************************************ 00:05:44.487 START TEST event_scheduler 00:05:44.487 ************************************ 00:05:44.487 04:01:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:44.487 * Looking for test storage... 00:05:44.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:44.487 04:01:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.487 04:01:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.487 04:01:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.487 04:01:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.487 04:01:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.487 04:01:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.487 04:01:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.487 04:01:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.487 04:01:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.487 04:01:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.487 04:01:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.487 04:01:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.487 04:01:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.487 04:01:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.487 04:01:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.487 04:01:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.487 04:01:46 -- scripts/common.sh@344 -- # : 1 00:05:44.487 04:01:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.487 04:01:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.488 04:01:46 -- scripts/common.sh@364 -- # decimal 1 00:05:44.488 04:01:46 -- scripts/common.sh@352 -- # local d=1 00:05:44.488 04:01:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.488 04:01:46 -- scripts/common.sh@354 -- # echo 1 00:05:44.488 04:01:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.488 04:01:46 -- scripts/common.sh@365 -- # decimal 2 00:05:44.488 04:01:46 -- scripts/common.sh@352 -- # local d=2 00:05:44.488 04:01:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.488 04:01:46 -- scripts/common.sh@354 -- # echo 2 00:05:44.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.488 04:01:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.488 04:01:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.488 04:01:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.488 04:01:46 -- scripts/common.sh@367 -- # return 0 00:05:44.488 04:01:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.488 04:01:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.488 --rc genhtml_branch_coverage=1 00:05:44.488 --rc genhtml_function_coverage=1 00:05:44.488 --rc genhtml_legend=1 00:05:44.488 --rc geninfo_all_blocks=1 00:05:44.488 --rc geninfo_unexecuted_blocks=1 00:05:44.488 00:05:44.488 ' 00:05:44.488 04:01:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.488 --rc genhtml_branch_coverage=1 00:05:44.488 --rc genhtml_function_coverage=1 00:05:44.488 --rc genhtml_legend=1 00:05:44.488 --rc geninfo_all_blocks=1 00:05:44.488 --rc geninfo_unexecuted_blocks=1 00:05:44.488 00:05:44.488 ' 00:05:44.488 04:01:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.488 --rc genhtml_branch_coverage=1 00:05:44.488 --rc genhtml_function_coverage=1 00:05:44.488 --rc genhtml_legend=1 00:05:44.488 --rc geninfo_all_blocks=1 00:05:44.488 --rc geninfo_unexecuted_blocks=1 00:05:44.488 00:05:44.488 ' 00:05:44.488 04:01:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.488 --rc genhtml_branch_coverage=1 00:05:44.488 --rc genhtml_function_coverage=1 00:05:44.488 --rc genhtml_legend=1 00:05:44.488 --rc geninfo_all_blocks=1 00:05:44.488 --rc geninfo_unexecuted_blocks=1 00:05:44.488 00:05:44.488 ' 00:05:44.488 04:01:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.488 04:01:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68768 00:05:44.488 04:01:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.488 04:01:46 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.488 04:01:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 68768 00:05:44.488 04:01:46 -- common/autotest_common.sh@829 -- # '[' -z 68768 ']' 00:05:44.488 04:01:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.488 04:01:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.488 04:01:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.488 04:01:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.488 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:44.488 [2024-11-26 04:01:46.097694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.488 [2024-11-26 04:01:46.097957] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68768 ] 00:05:44.488 [2024-11-26 04:01:46.234555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.747 [2024-11-26 04:01:46.331506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.747 [2024-11-26 04:01:46.331644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.747 [2024-11-26 04:01:46.331790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.747 [2024-11-26 04:01:46.331793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.314 04:01:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.314 04:01:47 -- common/autotest_common.sh@862 -- # return 0 00:05:45.314 04:01:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.314 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.314 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 POWER: Env isn't set yet! 00:05:45.574 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:45.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.574 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.574 POWER: Attempting to initialise PSTAT power management... 00:05:45.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.574 POWER: Cannot set governor of lcore 0 to performance 00:05:45.574 POWER: Attempting to initialise AMD PSTATE power management... 00:05:45.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.574 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.574 POWER: Attempting to initialise CPPC power management... 00:05:45.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.574 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.574 POWER: Attempting to initialise VM power management... 
00:05:45.574 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:45.574 POWER: Unable to set Power Management Environment for lcore 0 00:05:45.574 [2024-11-26 04:01:47.082951] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:45.574 [2024-11-26 04:01:47.082965] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:45.574 [2024-11-26 04:01:47.082974] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.574 [2024-11-26 04:01:47.082987] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.574 [2024-11-26 04:01:47.082994] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.574 [2024-11-26 04:01:47.083001] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 [2024-11-26 04:01:47.199007] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.574 04:01:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.574 04:01:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 ************************************ 00:05:45.574 START TEST scheduler_create_thread 00:05:45.574 ************************************ 00:05:45.574 04:01:47 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 2 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 3 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 4 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 5 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 6 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 7 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 8 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 9 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 10 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.574 04:01:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.574 04:01:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.574 04:01:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.574 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:47.477 04:01:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.477 04:01:48 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:47.477 04:01:48 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:47.477 04:01:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.477 04:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 04:01:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.413 00:05:48.413 real 0m2.614s 00:05:48.413 user 0m0.019s 00:05:48.413 sys 0m0.005s 00:05:48.413 ************************************ 00:05:48.413 END TEST scheduler_create_thread 00:05:48.413 ************************************ 00:05:48.413 04:01:49 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.413 04:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 04:01:49 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.413 04:01:49 -- scheduler/scheduler.sh@46 -- # killprocess 68768 00:05:48.413 04:01:49 -- common/autotest_common.sh@936 -- # '[' -z 68768 ']' 00:05:48.413 04:01:49 -- common/autotest_common.sh@940 -- # kill -0 68768 00:05:48.413 04:01:49 -- common/autotest_common.sh@941 -- # uname 00:05:48.413 04:01:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.413 04:01:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68768 00:05:48.413 killing process with pid 68768 00:05:48.413 04:01:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:48.413 04:01:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:48.413 04:01:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68768' 00:05:48.413 04:01:49 -- common/autotest_common.sh@955 -- # kill 68768 00:05:48.413 04:01:49 -- common/autotest_common.sh@960 -- # wait 68768 00:05:48.671 [2024-11-26 04:01:50.306448] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:48.930 00:05:48.930 real 0m4.642s 00:05:48.930 user 0m8.682s 00:05:48.930 sys 0m0.426s 00:05:48.930 04:01:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.931 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.931 ************************************ 00:05:48.931 END TEST event_scheduler 00:05:48.931 ************************************ 00:05:48.931 04:01:50 -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.931 04:01:50 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.931 04:01:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.931 04:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.931 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.931 ************************************ 00:05:48.931 START TEST app_repeat 00:05:48.931 ************************************ 00:05:48.931 04:01:50 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:48.931 04:01:50 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.931 04:01:50 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.931 04:01:50 -- event/event.sh@13 -- # local nbd_list 00:05:48.931 04:01:50 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.931 04:01:50 -- event/event.sh@14 -- # local bdev_list 00:05:48.931 04:01:50 -- event/event.sh@15 -- # local repeat_times=4 00:05:48.931 04:01:50 -- event/event.sh@17 -- # modprobe nbd 00:05:48.931 Process app_repeat pid: 68891 00:05:48.931 04:01:50 -- event/event.sh@19 -- # repeat_pid=68891 00:05:48.931 04:01:50 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.931 04:01:50 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.931 04:01:50 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68891' 00:05:48.931 spdk_app_start Round 0 00:05:48.931 04:01:50 -- event/event.sh@23 -- # for i in {0..2} 00:05:48.931 04:01:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:48.931 04:01:50 -- event/event.sh@25 -- # waitforlisten 68891 /var/tmp/spdk-nbd.sock 00:05:48.931 04:01:50 -- common/autotest_common.sh@829 -- # '[' -z 68891 ']' 00:05:48.931 04:01:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.931 04:01:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.931 04:01:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.931 04:01:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.931 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.931 [2024-11-26 04:01:50.616747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.931 [2024-11-26 04:01:50.617072] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68891 ] 00:05:49.190 [2024-11-26 04:01:50.755249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.190 [2024-11-26 04:01:50.825194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.190 [2024-11-26 04:01:50.825207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.758 04:01:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.758 04:01:51 -- common/autotest_common.sh@862 -- # return 0 00:05:49.758 04:01:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.326 Malloc0 00:05:50.326 04:01:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.326 Malloc1 00:05:50.585 04:01:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.585 /dev/nbd0 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.585 04:01:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:50.585 04:01:52 -- common/autotest_common.sh@867 -- # 
local i 00:05:50.585 04:01:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.585 04:01:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.585 04:01:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:50.585 04:01:52 -- common/autotest_common.sh@871 -- # break 00:05:50.585 04:01:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.585 04:01:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.585 04:01:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.585 1+0 records in 00:05:50.585 1+0 records out 00:05:50.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268499 s, 15.3 MB/s 00:05:50.585 04:01:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.585 04:01:52 -- common/autotest_common.sh@884 -- # size=4096 00:05:50.585 04:01:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.585 04:01:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.585 04:01:52 -- common/autotest_common.sh@887 -- # return 0 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.585 04:01:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.844 /dev/nbd1 00:05:50.844 04:01:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.845 04:01:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.845 04:01:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.845 04:01:52 -- common/autotest_common.sh@867 -- # local i 00:05:50.845 04:01:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.845 04:01:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.845 04:01:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.845 04:01:52 -- common/autotest_common.sh@871 -- # break 00:05:50.845 04:01:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.845 04:01:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.845 04:01:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.845 1+0 records in 00:05:50.845 1+0 records out 00:05:50.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372246 s, 11.0 MB/s 00:05:50.845 04:01:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.845 04:01:52 -- common/autotest_common.sh@884 -- # size=4096 00:05:50.845 04:01:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.845 04:01:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.845 04:01:52 -- common/autotest_common.sh@887 -- # return 0 00:05:50.845 04:01:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.845 04:01:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.845 04:01:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.845 04:01:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.845 04:01:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.415 { 00:05:51.415 "bdev_name": "Malloc0", 
00:05:51.415 "nbd_device": "/dev/nbd0" 00:05:51.415 }, 00:05:51.415 { 00:05:51.415 "bdev_name": "Malloc1", 00:05:51.415 "nbd_device": "/dev/nbd1" 00:05:51.415 } 00:05:51.415 ]' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.415 { 00:05:51.415 "bdev_name": "Malloc0", 00:05:51.415 "nbd_device": "/dev/nbd0" 00:05:51.415 }, 00:05:51.415 { 00:05:51.415 "bdev_name": "Malloc1", 00:05:51.415 "nbd_device": "/dev/nbd1" 00:05:51.415 } 00:05:51.415 ]' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.415 /dev/nbd1' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.415 /dev/nbd1' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.415 256+0 records in 00:05:51.415 256+0 records out 00:05:51.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758399 s, 138 MB/s 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.415 256+0 records in 00:05:51.415 256+0 records out 00:05:51.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260182 s, 40.3 MB/s 00:05:51.415 04:01:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.416 256+0 records in 00:05:51.416 256+0 records out 00:05:51.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274974 s, 38.1 MB/s 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.416 04:01:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.416 04:01:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@41 -- # break 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.689 04:01:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@41 -- # break 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.968 04:01:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@65 -- # true 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.235 04:01:53 -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.235 04:01:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.494 04:01:54 -- event/event.sh@35 -- # sleep 3 00:05:52.753 [2024-11-26 04:01:54.447496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.753 
[2024-11-26 04:01:54.505646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.753 [2024-11-26 04:01:54.505661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.012 [2024-11-26 04:01:54.577180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.012 [2024-11-26 04:01:54.577242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.580 spdk_app_start Round 1 00:05:55.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.580 04:01:57 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.580 04:01:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.580 04:01:57 -- event/event.sh@25 -- # waitforlisten 68891 /var/tmp/spdk-nbd.sock 00:05:55.580 04:01:57 -- common/autotest_common.sh@829 -- # '[' -z 68891 ']' 00:05:55.580 04:01:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.580 04:01:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.580 04:01:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.580 04:01:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.580 04:01:57 -- common/autotest_common.sh@10 -- # set +x 00:05:55.838 04:01:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.838 04:01:57 -- common/autotest_common.sh@862 -- # return 0 00:05:55.838 04:01:57 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.096 Malloc0 00:05:56.096 04:01:57 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.355 Malloc1 00:05:56.355 04:01:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@12 -- # local i 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.355 04:01:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.615 /dev/nbd0 00:05:56.615 04:01:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.615 04:01:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.615 04:01:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:56.615 
04:01:58 -- common/autotest_common.sh@867 -- # local i 00:05:56.615 04:01:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:56.615 04:01:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:56.615 04:01:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:56.615 04:01:58 -- common/autotest_common.sh@871 -- # break 00:05:56.615 04:01:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:56.615 04:01:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:56.615 04:01:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.615 1+0 records in 00:05:56.615 1+0 records out 00:05:56.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269507 s, 15.2 MB/s 00:05:56.615 04:01:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.615 04:01:58 -- common/autotest_common.sh@884 -- # size=4096 00:05:56.615 04:01:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.615 04:01:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:56.615 04:01:58 -- common/autotest_common.sh@887 -- # return 0 00:05:56.615 04:01:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.615 04:01:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.615 04:01:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.874 /dev/nbd1 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.874 04:01:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:56.874 04:01:58 -- common/autotest_common.sh@867 -- # local i 00:05:56.874 04:01:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:56.874 04:01:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:56.874 04:01:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:56.874 04:01:58 -- common/autotest_common.sh@871 -- # break 00:05:56.874 04:01:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:56.874 04:01:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:56.874 04:01:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.874 1+0 records in 00:05:56.874 1+0 records out 00:05:56.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340388 s, 12.0 MB/s 00:05:56.874 04:01:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.874 04:01:58 -- common/autotest_common.sh@884 -- # size=4096 00:05:56.874 04:01:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.874 04:01:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:56.874 04:01:58 -- common/autotest_common.sh@887 -- # return 0 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.874 04:01:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.132 04:01:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.132 
{ 00:05:57.132 "bdev_name": "Malloc0", 00:05:57.132 "nbd_device": "/dev/nbd0" 00:05:57.132 }, 00:05:57.132 { 00:05:57.132 "bdev_name": "Malloc1", 00:05:57.132 "nbd_device": "/dev/nbd1" 00:05:57.132 } 00:05:57.132 ]' 00:05:57.132 04:01:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.132 { 00:05:57.132 "bdev_name": "Malloc0", 00:05:57.132 "nbd_device": "/dev/nbd0" 00:05:57.132 }, 00:05:57.132 { 00:05:57.132 "bdev_name": "Malloc1", 00:05:57.132 "nbd_device": "/dev/nbd1" 00:05:57.132 } 00:05:57.132 ]' 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.133 /dev/nbd1' 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.133 /dev/nbd1' 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.133 256+0 records in 00:05:57.133 256+0 records out 00:05:57.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0097068 s, 108 MB/s 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.133 04:01:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.392 256+0 records in 00:05:57.392 256+0 records out 00:05:57.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228097 s, 46.0 MB/s 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.392 256+0 records in 00:05:57.392 256+0 records out 00:05:57.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247928 s, 42.3 MB/s 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.392 04:01:58 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@51 -- # local i 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.392 04:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@41 -- # break 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.652 04:01:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.910 04:01:59 -- bdev/nbd_common.sh@41 -- # break 00:05:57.911 04:01:59 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.911 04:01:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.911 04:01:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.911 04:01:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@65 -- # true 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.168 04:01:59 -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.168 04:01:59 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.426 04:02:00 -- event/event.sh@35 -- # sleep 3 00:05:58.685 [2024-11-26 04:02:00.375186] app.c: 798:spdk_app_start: *NOTICE*: 
Total cores available: 2 00:05:58.685 [2024-11-26 04:02:00.425701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.685 [2024-11-26 04:02:00.425736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.943 [2024-11-26 04:02:00.496370] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.943 [2024-11-26 04:02:00.496434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.474 spdk_app_start Round 2 00:06:01.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.474 04:02:03 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.474 04:02:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.474 04:02:03 -- event/event.sh@25 -- # waitforlisten 68891 /var/tmp/spdk-nbd.sock 00:06:01.474 04:02:03 -- common/autotest_common.sh@829 -- # '[' -z 68891 ']' 00:06:01.474 04:02:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.474 04:02:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.474 04:02:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.474 04:02:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.474 04:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.733 04:02:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.733 04:02:03 -- common/autotest_common.sh@862 -- # return 0 00:06:01.733 04:02:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.992 Malloc0 00:06:01.992 04:02:03 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.250 Malloc1 00:06:02.250 04:02:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.250 04:02:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@12 -- # local i 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.251 04:02:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.509 /dev/nbd0 00:06:02.509 04:02:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.509 04:02:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.509 04:02:04 -- common/autotest_common.sh@866 -- 
# local nbd_name=nbd0 00:06:02.509 04:02:04 -- common/autotest_common.sh@867 -- # local i 00:06:02.509 04:02:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:02.509 04:02:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:02.509 04:02:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:02.509 04:02:04 -- common/autotest_common.sh@871 -- # break 00:06:02.509 04:02:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:02.509 04:02:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:02.509 04:02:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.509 1+0 records in 00:06:02.509 1+0 records out 00:06:02.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245182 s, 16.7 MB/s 00:06:02.509 04:02:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.509 04:02:04 -- common/autotest_common.sh@884 -- # size=4096 00:06:02.509 04:02:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.509 04:02:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:02.509 04:02:04 -- common/autotest_common.sh@887 -- # return 0 00:06:02.509 04:02:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.509 04:02:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.509 04:02:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.768 /dev/nbd1 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.768 04:02:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:02.768 04:02:04 -- common/autotest_common.sh@867 -- # local i 00:06:02.768 04:02:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:02.768 04:02:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:02.768 04:02:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:02.768 04:02:04 -- common/autotest_common.sh@871 -- # break 00:06:02.768 04:02:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:02.768 04:02:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:02.768 04:02:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.768 1+0 records in 00:06:02.768 1+0 records out 00:06:02.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336548 s, 12.2 MB/s 00:06:02.768 04:02:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.768 04:02:04 -- common/autotest_common.sh@884 -- # size=4096 00:06:02.768 04:02:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.768 04:02:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:02.768 04:02:04 -- common/autotest_common.sh@887 -- # return 0 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.768 04:02:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@63 
-- # nbd_disks_json='[ 00:06:03.027 { 00:06:03.027 "bdev_name": "Malloc0", 00:06:03.027 "nbd_device": "/dev/nbd0" 00:06:03.027 }, 00:06:03.027 { 00:06:03.027 "bdev_name": "Malloc1", 00:06:03.027 "nbd_device": "/dev/nbd1" 00:06:03.027 } 00:06:03.027 ]' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.027 { 00:06:03.027 "bdev_name": "Malloc0", 00:06:03.027 "nbd_device": "/dev/nbd0" 00:06:03.027 }, 00:06:03.027 { 00:06:03.027 "bdev_name": "Malloc1", 00:06:03.027 "nbd_device": "/dev/nbd1" 00:06:03.027 } 00:06:03.027 ]' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.027 /dev/nbd1' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.027 /dev/nbd1' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.027 256+0 records in 00:06:03.027 256+0 records out 00:06:03.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00975171 s, 108 MB/s 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.027 04:02:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.285 256+0 records in 00:06:03.285 256+0 records out 00:06:03.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242897 s, 43.2 MB/s 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.286 256+0 records in 00:06:03.286 256+0 records out 00:06:03.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263166 s, 39.8 MB/s 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
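Note: the dd lines above are the write pass of nbd_dd_data_verify and the surrounding cmp lines are the matching verify pass. Condensed into a standalone sketch (devices, block size and count taken from the trace; the nbdrandtest path is the harness's scratch file, and error handling is omitted):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of reference data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write pass
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                               # verify pass, non-zero exit on mismatch
    done
    rm "$tmp"

The O_DIRECT writes keep the page cache out of the comparison, so cmp really reads back through the Malloc bdevs behind /dev/nbd0 and /dev/nbd1.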
00:06:03.286 04:02:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@51 -- # local i 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.286 04:02:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.286 04:02:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.286 04:02:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.286 04:02:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.286 04:02:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.286 04:02:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.286 04:02:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.544 04:02:05 -- bdev/nbd_common.sh@41 -- # break 00:06:03.544 04:02:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.544 04:02:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.544 04:02:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@41 -- # break 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.803 04:02:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@65 -- # true 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.062 04:02:05 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.062 04:02:05 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.321 04:02:05 -- event/event.sh@35 -- # sleep 3 00:06:04.579 [2024-11-26 04:02:06.112345] app.c: 
798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.579 [2024-11-26 04:02:06.164652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.580 [2024-11-26 04:02:06.164669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.580 [2024-11-26 04:02:06.234328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.580 [2024-11-26 04:02:06.234394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.865 04:02:08 -- event/event.sh@38 -- # waitforlisten 68891 /var/tmp/spdk-nbd.sock 00:06:07.865 04:02:08 -- common/autotest_common.sh@829 -- # '[' -z 68891 ']' 00:06:07.865 04:02:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.865 04:02:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.865 04:02:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.865 04:02:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.865 04:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.865 04:02:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.865 04:02:09 -- common/autotest_common.sh@862 -- # return 0 00:06:07.865 04:02:09 -- event/event.sh@39 -- # killprocess 68891 00:06:07.865 04:02:09 -- common/autotest_common.sh@936 -- # '[' -z 68891 ']' 00:06:07.865 04:02:09 -- common/autotest_common.sh@940 -- # kill -0 68891 00:06:07.865 04:02:09 -- common/autotest_common.sh@941 -- # uname 00:06:07.865 04:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.865 04:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68891 00:06:07.865 killing process with pid 68891 00:06:07.865 04:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.865 04:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.865 04:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68891' 00:06:07.865 04:02:09 -- common/autotest_common.sh@955 -- # kill 68891 00:06:07.865 04:02:09 -- common/autotest_common.sh@960 -- # wait 68891 00:06:07.865 spdk_app_start is called in Round 0. 00:06:07.865 Shutdown signal received, stop current app iteration 00:06:07.865 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:07.865 spdk_app_start is called in Round 1. 00:06:07.865 Shutdown signal received, stop current app iteration 00:06:07.865 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:07.865 spdk_app_start is called in Round 2. 00:06:07.865 Shutdown signal received, stop current app iteration 00:06:07.865 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:07.865 spdk_app_start is called in Round 3. 
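Note: the Round 0..3 messages above come from the application side of app_repeat; the driving loop on the test side, reconstructed from the for/echo/waitforlisten/spdk_kill_instance lines in this trace (helper bodies omitted, rpc.py path shortened), looks roughly like:

    sock=/var/tmp/spdk-nbd.sock
    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" "$sock"                  # harness helper
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc.py -s "$sock" spdk_kill_instance SIGTERM      # app reinitializes for the next round
        sleep 3
    done
    waitforlisten "$app_pid" "$sock"                      # Round 3: wait once more, then killprocess

Each round creates the two Malloc bdevs, exports them over NBD, runs the 1 MiB write/verify shown earlier, and detaches the devices before the SIGTERM.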
00:06:07.865 Shutdown signal received, stop current app iteration 00:06:07.865 04:02:09 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.865 04:02:09 -- event/event.sh@42 -- # return 0 00:06:07.865 00:06:07.865 real 0m18.828s 00:06:07.865 user 0m42.091s 00:06:07.865 sys 0m2.959s 00:06:07.865 04:02:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.865 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:07.865 ************************************ 00:06:07.865 END TEST app_repeat 00:06:07.865 ************************************ 00:06:07.865 04:02:09 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.865 04:02:09 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:07.865 04:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.865 04:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.865 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:07.865 ************************************ 00:06:07.865 START TEST cpu_locks 00:06:07.866 ************************************ 00:06:07.866 04:02:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:07.866 * Looking for test storage... 00:06:07.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:07.866 04:02:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.866 04:02:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.866 04:02:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:07.866 04:02:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:07.866 04:02:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:07.866 04:02:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:07.866 04:02:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:07.866 04:02:09 -- scripts/common.sh@335 -- # IFS=.-: 00:06:07.866 04:02:09 -- scripts/common.sh@335 -- # read -ra ver1 00:06:07.866 04:02:09 -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.866 04:02:09 -- scripts/common.sh@336 -- # read -ra ver2 00:06:07.866 04:02:09 -- scripts/common.sh@337 -- # local 'op=<' 00:06:07.866 04:02:09 -- scripts/common.sh@339 -- # ver1_l=2 00:06:07.866 04:02:09 -- scripts/common.sh@340 -- # ver2_l=1 00:06:07.866 04:02:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:07.866 04:02:09 -- scripts/common.sh@343 -- # case "$op" in 00:06:07.866 04:02:09 -- scripts/common.sh@344 -- # : 1 00:06:07.866 04:02:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:07.866 04:02:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.866 04:02:09 -- scripts/common.sh@364 -- # decimal 1 00:06:07.866 04:02:09 -- scripts/common.sh@352 -- # local d=1 00:06:07.866 04:02:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.866 04:02:09 -- scripts/common.sh@354 -- # echo 1 00:06:07.866 04:02:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.125 04:02:09 -- scripts/common.sh@365 -- # decimal 2 00:06:08.125 04:02:09 -- scripts/common.sh@352 -- # local d=2 00:06:08.125 04:02:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.125 04:02:09 -- scripts/common.sh@354 -- # echo 2 00:06:08.125 04:02:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.125 04:02:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.125 04:02:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.125 04:02:09 -- scripts/common.sh@367 -- # return 0 00:06:08.125 04:02:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.125 04:02:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.125 --rc genhtml_branch_coverage=1 00:06:08.125 --rc genhtml_function_coverage=1 00:06:08.125 --rc genhtml_legend=1 00:06:08.125 --rc geninfo_all_blocks=1 00:06:08.125 --rc geninfo_unexecuted_blocks=1 00:06:08.125 00:06:08.125 ' 00:06:08.125 04:02:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.125 --rc genhtml_branch_coverage=1 00:06:08.125 --rc genhtml_function_coverage=1 00:06:08.125 --rc genhtml_legend=1 00:06:08.125 --rc geninfo_all_blocks=1 00:06:08.125 --rc geninfo_unexecuted_blocks=1 00:06:08.125 00:06:08.125 ' 00:06:08.125 04:02:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.125 --rc genhtml_branch_coverage=1 00:06:08.125 --rc genhtml_function_coverage=1 00:06:08.125 --rc genhtml_legend=1 00:06:08.125 --rc geninfo_all_blocks=1 00:06:08.126 --rc geninfo_unexecuted_blocks=1 00:06:08.126 00:06:08.126 ' 00:06:08.126 04:02:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.126 --rc genhtml_branch_coverage=1 00:06:08.126 --rc genhtml_function_coverage=1 00:06:08.126 --rc genhtml_legend=1 00:06:08.126 --rc geninfo_all_blocks=1 00:06:08.126 --rc geninfo_unexecuted_blocks=1 00:06:08.126 00:06:08.126 ' 00:06:08.126 04:02:09 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.126 04:02:09 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.126 04:02:09 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.126 04:02:09 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.126 04:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.126 04:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.126 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:08.126 ************************************ 00:06:08.126 START TEST default_locks 00:06:08.126 ************************************ 00:06:08.126 04:02:09 -- common/autotest_common.sh@1114 -- # default_locks 00:06:08.126 04:02:09 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69517 00:06:08.126 04:02:09 -- event/cpu_locks.sh@47 -- # waitforlisten 69517 00:06:08.126 04:02:09 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:08.126 04:02:09 -- common/autotest_common.sh@829 -- # '[' -z 69517 ']' 00:06:08.126 04:02:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.126 04:02:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.126 04:02:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.126 04:02:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.126 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:08.126 [2024-11-26 04:02:09.694898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.126 [2024-11-26 04:02:09.694998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69517 ] 00:06:08.126 [2024-11-26 04:02:09.829672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.385 [2024-11-26 04:02:09.898934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.385 [2024-11-26 04:02:09.899084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.953 04:02:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.953 04:02:10 -- common/autotest_common.sh@862 -- # return 0 00:06:08.953 04:02:10 -- event/cpu_locks.sh@49 -- # locks_exist 69517 00:06:08.953 04:02:10 -- event/cpu_locks.sh@22 -- # lslocks -p 69517 00:06:08.953 04:02:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.522 04:02:11 -- event/cpu_locks.sh@50 -- # killprocess 69517 00:06:09.522 04:02:11 -- common/autotest_common.sh@936 -- # '[' -z 69517 ']' 00:06:09.522 04:02:11 -- common/autotest_common.sh@940 -- # kill -0 69517 00:06:09.522 04:02:11 -- common/autotest_common.sh@941 -- # uname 00:06:09.522 04:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.522 04:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69517 00:06:09.522 04:02:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.522 04:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.522 killing process with pid 69517 00:06:09.522 04:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69517' 00:06:09.522 04:02:11 -- common/autotest_common.sh@955 -- # kill 69517 00:06:09.522 04:02:11 -- common/autotest_common.sh@960 -- # wait 69517 00:06:10.090 04:02:11 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69517 00:06:10.090 04:02:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.090 04:02:11 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69517 00:06:10.090 04:02:11 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:10.090 04:02:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.090 04:02:11 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:10.090 04:02:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.090 04:02:11 -- common/autotest_common.sh@653 -- # waitforlisten 69517 00:06:10.090 04:02:11 -- common/autotest_common.sh@829 -- # '[' -z 69517 ']' 00:06:10.090 04:02:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.090 04:02:11 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.090 04:02:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.090 04:02:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.090 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.090 ERROR: process (pid: 69517) is no longer running 00:06:10.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69517) - No such process 00:06:10.090 04:02:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.090 04:02:11 -- common/autotest_common.sh@862 -- # return 1 00:06:10.090 04:02:11 -- common/autotest_common.sh@653 -- # es=1 00:06:10.090 04:02:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.090 04:02:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.090 04:02:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.090 04:02:11 -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.090 04:02:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.090 04:02:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.090 04:02:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.090 00:06:10.090 real 0m1.941s 00:06:10.090 user 0m2.004s 00:06:10.090 sys 0m0.609s 00:06:10.090 04:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.090 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.091 ************************************ 00:06:10.091 END TEST default_locks 00:06:10.091 ************************************ 00:06:10.091 04:02:11 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.091 04:02:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.091 04:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.091 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.091 ************************************ 00:06:10.091 START TEST default_locks_via_rpc 00:06:10.091 ************************************ 00:06:10.091 04:02:11 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:10.091 04:02:11 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69581 00:06:10.091 04:02:11 -- event/cpu_locks.sh@63 -- # waitforlisten 69581 00:06:10.091 04:02:11 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.091 04:02:11 -- common/autotest_common.sh@829 -- # '[' -z 69581 ']' 00:06:10.091 04:02:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.091 04:02:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.091 04:02:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.091 04:02:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.091 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.091 [2024-11-26 04:02:11.703475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
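Note: the default_locks test that just finished (pid 69517) relies on two checks that recur through the rest of cpu_locks.sh: locks_exist greps lslocks output for spdk_cpu_lock while the target is alive, and after killprocess the NOT wrapper expects waitforlisten on the dead pid to fail. As a standalone sketch (pid from the trace; waitforlisten is the harness helper):

    pid=69517
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # one /var/tmp/spdk_cpu_lock_* lock per core in -m 0x1
    kill "$pid" && wait "$pid"
    if waitforlisten "$pid" /var/tmp/spdk.sock; then
        echo "unexpected: dead pid reported as listening" >&2
        exit 1
    fi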
00:06:10.091 [2024-11-26 04:02:11.703597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69581 ] 00:06:10.091 [2024-11-26 04:02:11.839917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.348 [2024-11-26 04:02:11.907889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.348 [2024-11-26 04:02:11.908045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.916 04:02:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.916 04:02:12 -- common/autotest_common.sh@862 -- # return 0 00:06:10.916 04:02:12 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.916 04:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.916 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:10.916 04:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.916 04:02:12 -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.916 04:02:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.916 04:02:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.916 04:02:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.916 04:02:12 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.916 04:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.916 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:10.916 04:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.916 04:02:12 -- event/cpu_locks.sh@71 -- # locks_exist 69581 00:06:10.916 04:02:12 -- event/cpu_locks.sh@22 -- # lslocks -p 69581 00:06:10.916 04:02:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.484 04:02:13 -- event/cpu_locks.sh@73 -- # killprocess 69581 00:06:11.484 04:02:13 -- common/autotest_common.sh@936 -- # '[' -z 69581 ']' 00:06:11.484 04:02:13 -- common/autotest_common.sh@940 -- # kill -0 69581 00:06:11.484 04:02:13 -- common/autotest_common.sh@941 -- # uname 00:06:11.484 04:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.484 04:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69581 00:06:11.484 04:02:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.484 04:02:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.484 killing process with pid 69581 00:06:11.484 04:02:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69581' 00:06:11.484 04:02:13 -- common/autotest_common.sh@955 -- # kill 69581 00:06:11.484 04:02:13 -- common/autotest_common.sh@960 -- # wait 69581 00:06:12.053 00:06:12.053 real 0m1.932s 00:06:12.053 user 0m1.966s 00:06:12.053 sys 0m0.624s 00:06:12.053 04:02:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.053 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.053 ************************************ 00:06:12.053 END TEST default_locks_via_rpc 00:06:12.053 ************************************ 00:06:12.053 04:02:13 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.053 04:02:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.053 04:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.053 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.053 
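Note: default_locks_via_rpc above shows that the core locks can be toggled at runtime over the RPC socket: after framework_disable_cpumask_locks the no_locks helper finds no /var/tmp/spdk_cpu_lock_* files, and after framework_enable_cpumask_locks the lslocks probe succeeds again. A minimal sketch of that sequence (rpc.py path abbreviated, default socket /var/tmp/spdk.sock; pidof is used here only for illustration):

    rpc.py framework_disable_cpumask_locks       # release the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null      # expect no output now
    rpc.py framework_enable_cpumask_locks        # re-acquire locks for the -m mask
    lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock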
************************************ 00:06:12.053 START TEST non_locking_app_on_locked_coremask 00:06:12.053 ************************************ 00:06:12.053 04:02:13 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:12.053 04:02:13 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69650 00:06:12.053 04:02:13 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.053 04:02:13 -- event/cpu_locks.sh@81 -- # waitforlisten 69650 /var/tmp/spdk.sock 00:06:12.053 04:02:13 -- common/autotest_common.sh@829 -- # '[' -z 69650 ']' 00:06:12.053 04:02:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.053 04:02:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.053 04:02:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.053 04:02:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.053 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.053 [2024-11-26 04:02:13.670678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.053 [2024-11-26 04:02:13.670772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69650 ] 00:06:12.053 [2024-11-26 04:02:13.801625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.312 [2024-11-26 04:02:13.869460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.312 [2024-11-26 04:02:13.869610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.880 04:02:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.880 04:02:14 -- common/autotest_common.sh@862 -- # return 0 00:06:12.880 04:02:14 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69678 00:06:12.880 04:02:14 -- event/cpu_locks.sh@85 -- # waitforlisten 69678 /var/tmp/spdk2.sock 00:06:12.880 04:02:14 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.880 04:02:14 -- common/autotest_common.sh@829 -- # '[' -z 69678 ']' 00:06:12.880 04:02:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.880 04:02:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.880 04:02:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.880 04:02:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.880 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.139 [2024-11-26 04:02:14.693119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.139 [2024-11-26 04:02:14.693225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69678 ] 00:06:13.139 [2024-11-26 04:02:14.831466] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
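Note: non_locking_app_on_locked_coremask runs two targets on the same core mask. The first (pid 69650) holds the core 0 lock; the second (pid 69678) can start on that core only because it is given --disable-cpumask-locks, hence the "CPU core locks deactivated" notice just above, and it needs its own RPC socket so the two instances do not collide. A reduced sketch of the launch sequence seen in this trace:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 &                                                  # takes /var/tmp/spdk_cpu_lock_000
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock attempted

Without --disable-cpumask-locks the second start would abort, which is exactly what the locking_app_on_locked_coremask test further down demonstrates.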
00:06:13.139 [2024-11-26 04:02:14.831507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.398 [2024-11-26 04:02:14.975522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.398 [2024-11-26 04:02:14.975669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.966 04:02:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.966 04:02:15 -- common/autotest_common.sh@862 -- # return 0 00:06:13.966 04:02:15 -- event/cpu_locks.sh@87 -- # locks_exist 69650 00:06:13.966 04:02:15 -- event/cpu_locks.sh@22 -- # lslocks -p 69650 00:06:13.966 04:02:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.902 04:02:16 -- event/cpu_locks.sh@89 -- # killprocess 69650 00:06:14.903 04:02:16 -- common/autotest_common.sh@936 -- # '[' -z 69650 ']' 00:06:14.903 04:02:16 -- common/autotest_common.sh@940 -- # kill -0 69650 00:06:14.903 04:02:16 -- common/autotest_common.sh@941 -- # uname 00:06:14.903 04:02:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.903 04:02:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69650 00:06:14.903 04:02:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.903 04:02:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.903 killing process with pid 69650 00:06:14.903 04:02:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69650' 00:06:14.903 04:02:16 -- common/autotest_common.sh@955 -- # kill 69650 00:06:14.903 04:02:16 -- common/autotest_common.sh@960 -- # wait 69650 00:06:15.838 04:02:17 -- event/cpu_locks.sh@90 -- # killprocess 69678 00:06:15.838 04:02:17 -- common/autotest_common.sh@936 -- # '[' -z 69678 ']' 00:06:15.838 04:02:17 -- common/autotest_common.sh@940 -- # kill -0 69678 00:06:15.838 04:02:17 -- common/autotest_common.sh@941 -- # uname 00:06:15.838 04:02:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.838 04:02:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69678 00:06:15.838 04:02:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.838 04:02:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.838 killing process with pid 69678 00:06:15.838 04:02:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69678' 00:06:15.838 04:02:17 -- common/autotest_common.sh@955 -- # kill 69678 00:06:15.838 04:02:17 -- common/autotest_common.sh@960 -- # wait 69678 00:06:16.405 00:06:16.405 real 0m4.253s 00:06:16.405 user 0m4.522s 00:06:16.405 sys 0m1.245s 00:06:16.405 04:02:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.405 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.405 ************************************ 00:06:16.405 END TEST non_locking_app_on_locked_coremask 00:06:16.405 ************************************ 00:06:16.405 04:02:17 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.405 04:02:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.405 04:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.405 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.405 ************************************ 00:06:16.406 START TEST locking_app_on_unlocked_coremask 00:06:16.406 ************************************ 00:06:16.406 04:02:17 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:16.406 04:02:17 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69763 00:06:16.406 04:02:17 -- event/cpu_locks.sh@99 -- # waitforlisten 69763 /var/tmp/spdk.sock 00:06:16.406 04:02:17 -- common/autotest_common.sh@829 -- # '[' -z 69763 ']' 00:06:16.406 04:02:17 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.406 04:02:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.406 04:02:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.406 04:02:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.406 04:02:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.406 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.406 [2024-11-26 04:02:18.000493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.406 [2024-11-26 04:02:18.000592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69763 ] 00:06:16.406 [2024-11-26 04:02:18.141836] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.406 [2024-11-26 04:02:18.141868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.664 [2024-11-26 04:02:18.208307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.664 [2024-11-26 04:02:18.208465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.232 04:02:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.232 04:02:18 -- common/autotest_common.sh@862 -- # return 0 00:06:17.232 04:02:18 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69785 00:06:17.232 04:02:18 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.232 04:02:18 -- event/cpu_locks.sh@103 -- # waitforlisten 69785 /var/tmp/spdk2.sock 00:06:17.232 04:02:18 -- common/autotest_common.sh@829 -- # '[' -z 69785 ']' 00:06:17.232 04:02:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.232 04:02:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.232 04:02:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.232 04:02:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.232 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.232 [2024-11-26 04:02:18.957444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.232 [2024-11-26 04:02:18.957556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69785 ] 00:06:17.490 [2024-11-26 04:02:19.099873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.490 [2024-11-26 04:02:19.229149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.490 [2024-11-26 04:02:19.229298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.424 04:02:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.424 04:02:19 -- common/autotest_common.sh@862 -- # return 0 00:06:18.424 04:02:19 -- event/cpu_locks.sh@105 -- # locks_exist 69785 00:06:18.424 04:02:19 -- event/cpu_locks.sh@22 -- # lslocks -p 69785 00:06:18.424 04:02:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.990 04:02:20 -- event/cpu_locks.sh@107 -- # killprocess 69763 00:06:18.990 04:02:20 -- common/autotest_common.sh@936 -- # '[' -z 69763 ']' 00:06:18.990 04:02:20 -- common/autotest_common.sh@940 -- # kill -0 69763 00:06:18.990 04:02:20 -- common/autotest_common.sh@941 -- # uname 00:06:18.990 04:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.990 04:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69763 00:06:18.990 04:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.990 04:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.990 killing process with pid 69763 00:06:18.990 04:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69763' 00:06:18.990 04:02:20 -- common/autotest_common.sh@955 -- # kill 69763 00:06:18.990 04:02:20 -- common/autotest_common.sh@960 -- # wait 69763 00:06:20.371 04:02:21 -- event/cpu_locks.sh@108 -- # killprocess 69785 00:06:20.371 04:02:21 -- common/autotest_common.sh@936 -- # '[' -z 69785 ']' 00:06:20.371 04:02:21 -- common/autotest_common.sh@940 -- # kill -0 69785 00:06:20.371 04:02:21 -- common/autotest_common.sh@941 -- # uname 00:06:20.371 04:02:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.371 04:02:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69785 00:06:20.371 04:02:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.371 04:02:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.371 killing process with pid 69785 00:06:20.371 04:02:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69785' 00:06:20.371 04:02:21 -- common/autotest_common.sh@955 -- # kill 69785 00:06:20.371 04:02:21 -- common/autotest_common.sh@960 -- # wait 69785 00:06:20.647 00:06:20.647 real 0m4.290s 00:06:20.647 user 0m4.487s 00:06:20.647 sys 0m1.263s 00:06:20.647 04:02:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.647 04:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.647 ************************************ 00:06:20.647 END TEST locking_app_on_unlocked_coremask 00:06:20.647 ************************************ 00:06:20.647 04:02:22 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:20.647 04:02:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.647 04:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.647 04:02:22 -- common/autotest_common.sh@10 -- # set +x 
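Note: the killprocess calls above follow the same shape every time. Roughly, keeping only the non-sudo branch that this trace takes (pid taken from the trace):

    pid=69785
    kill -0 "$pid"                                                      # fail early if it already exited
    [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
    [ "$name" = sudo ] || echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                                         # reap it so the next test starts clean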
00:06:20.647 ************************************ 00:06:20.647 START TEST locking_app_on_locked_coremask 00:06:20.647 ************************************ 00:06:20.647 04:02:22 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:20.647 04:02:22 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69870 00:06:20.647 04:02:22 -- event/cpu_locks.sh@116 -- # waitforlisten 69870 /var/tmp/spdk.sock 00:06:20.648 04:02:22 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.648 04:02:22 -- common/autotest_common.sh@829 -- # '[' -z 69870 ']' 00:06:20.648 04:02:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.648 04:02:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.648 04:02:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.648 04:02:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.648 04:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.648 [2024-11-26 04:02:22.346388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.648 [2024-11-26 04:02:22.346490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69870 ] 00:06:20.928 [2024-11-26 04:02:22.482785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.928 [2024-11-26 04:02:22.550965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.928 [2024-11-26 04:02:22.551120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.875 04:02:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.875 04:02:23 -- common/autotest_common.sh@862 -- # return 0 00:06:21.875 04:02:23 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69898 00:06:21.875 04:02:23 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69898 /var/tmp/spdk2.sock 00:06:21.875 04:02:23 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.875 04:02:23 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.875 04:02:23 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69898 /var/tmp/spdk2.sock 00:06:21.875 04:02:23 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.875 04:02:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.875 04:02:23 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.875 04:02:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.875 04:02:23 -- common/autotest_common.sh@653 -- # waitforlisten 69898 /var/tmp/spdk2.sock 00:06:21.875 04:02:23 -- common/autotest_common.sh@829 -- # '[' -z 69898 ']' 00:06:21.875 04:02:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.875 04:02:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.875 04:02:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
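Note: here the second target (pid 69898) is started on core 0 without --disable-cpumask-locks while pid 69870 still holds the lock, so it is expected to die, and the NOT wrapper turns that expected failure into a pass. The intent, reduced to a sketch (the harness actually wraps its waitforlisten helper rather than the binary itself):

    # a second instance on an already-locked core must not come up
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance acquired an already-locked core" >&2
        exit 1
    fi
    # the target reports the conflict itself:
    #   Cannot create lock on core 0, probably process 69870 has claimed it.
    #   Unable to acquire lock on assigned core mask - exiting.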
00:06:21.875 04:02:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.875 04:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 [2024-11-26 04:02:23.369815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.875 [2024-11-26 04:02:23.369953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69898 ] 00:06:21.875 [2024-11-26 04:02:23.508078] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69870 has claimed it. 00:06:21.875 [2024-11-26 04:02:23.508131] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.443 ERROR: process (pid: 69898) is no longer running 00:06:22.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69898) - No such process 00:06:22.443 04:02:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.443 04:02:24 -- common/autotest_common.sh@862 -- # return 1 00:06:22.443 04:02:24 -- common/autotest_common.sh@653 -- # es=1 00:06:22.443 04:02:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.443 04:02:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.443 04:02:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.443 04:02:24 -- event/cpu_locks.sh@122 -- # locks_exist 69870 00:06:22.443 04:02:24 -- event/cpu_locks.sh@22 -- # lslocks -p 69870 00:06:22.443 04:02:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.011 04:02:24 -- event/cpu_locks.sh@124 -- # killprocess 69870 00:06:23.011 04:02:24 -- common/autotest_common.sh@936 -- # '[' -z 69870 ']' 00:06:23.011 04:02:24 -- common/autotest_common.sh@940 -- # kill -0 69870 00:06:23.011 04:02:24 -- common/autotest_common.sh@941 -- # uname 00:06:23.011 04:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.011 04:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69870 00:06:23.011 04:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.011 04:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.011 killing process with pid 69870 00:06:23.011 04:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69870' 00:06:23.011 04:02:24 -- common/autotest_common.sh@955 -- # kill 69870 00:06:23.011 04:02:24 -- common/autotest_common.sh@960 -- # wait 69870 00:06:23.580 00:06:23.580 real 0m2.757s 00:06:23.580 user 0m3.094s 00:06:23.580 sys 0m0.748s 00:06:23.580 04:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.580 04:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:23.580 ************************************ 00:06:23.580 END TEST locking_app_on_locked_coremask 00:06:23.580 ************************************ 00:06:23.580 04:02:25 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.580 04:02:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.580 04:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.580 04:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:23.580 ************************************ 00:06:23.580 START TEST locking_overlapped_coremask 00:06:23.580 ************************************ 00:06:23.580 04:02:25 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:23.580 04:02:25 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69955 00:06:23.580 04:02:25 -- event/cpu_locks.sh@133 -- # waitforlisten 69955 /var/tmp/spdk.sock 00:06:23.580 04:02:25 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.580 04:02:25 -- common/autotest_common.sh@829 -- # '[' -z 69955 ']' 00:06:23.580 04:02:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.580 04:02:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.580 04:02:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.580 04:02:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.580 04:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:23.580 [2024-11-26 04:02:25.152990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.580 [2024-11-26 04:02:25.153100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69955 ] 00:06:23.580 [2024-11-26 04:02:25.292402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.838 [2024-11-26 04:02:25.355539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.838 [2024-11-26 04:02:25.355842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.838 [2024-11-26 04:02:25.355944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.838 [2024-11-26 04:02:25.355950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.405 04:02:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.405 04:02:26 -- common/autotest_common.sh@862 -- # return 0 00:06:24.405 04:02:26 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69985 00:06:24.405 04:02:26 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69985 /var/tmp/spdk2.sock 00:06:24.406 04:02:26 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.406 04:02:26 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69985 /var/tmp/spdk2.sock 00:06:24.406 04:02:26 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.406 04:02:26 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.406 04:02:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.406 04:02:26 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.406 04:02:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.406 04:02:26 -- common/autotest_common.sh@653 -- # waitforlisten 69985 /var/tmp/spdk2.sock 00:06:24.406 04:02:26 -- common/autotest_common.sh@829 -- # '[' -z 69985 ']' 00:06:24.406 04:02:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.406 04:02:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.406 04:02:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
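Note: locking_overlapped_coremask repeats the same conflict with multi-core masks: the first target takes -m 0x7 (cores 0, 1, 2) and the second asks for -m 0x1c (cores 2, 3, 4), so they collide on core 2 and the second start must fail; check_remaining_locks then verifies that exactly the first target's lock files are left behind. The mask arithmetic and the post-condition, as a sketch:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is claimed by both masks
    ls /var/tmp/spdk_cpu_lock_*                  # expected afterwards: _000 _001 _002 only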
00:06:24.406 04:02:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.406 04:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:24.665 [2024-11-26 04:02:26.176519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.665 [2024-11-26 04:02:26.176641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69985 ] 00:06:24.666 [2024-11-26 04:02:26.319297] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69955 has claimed it. 00:06:24.666 [2024-11-26 04:02:26.319382] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.235 ERROR: process (pid: 69985) is no longer running 00:06:25.235 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69985) - No such process 00:06:25.235 04:02:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.235 04:02:26 -- common/autotest_common.sh@862 -- # return 1 00:06:25.235 04:02:26 -- common/autotest_common.sh@653 -- # es=1 00:06:25.235 04:02:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.235 04:02:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.235 04:02:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.235 04:02:26 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.235 04:02:26 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.235 04:02:26 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.235 04:02:26 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.235 04:02:26 -- event/cpu_locks.sh@141 -- # killprocess 69955 00:06:25.235 04:02:26 -- common/autotest_common.sh@936 -- # '[' -z 69955 ']' 00:06:25.235 04:02:26 -- common/autotest_common.sh@940 -- # kill -0 69955 00:06:25.235 04:02:26 -- common/autotest_common.sh@941 -- # uname 00:06:25.235 04:02:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.235 04:02:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69955 00:06:25.235 04:02:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.235 killing process with pid 69955 00:06:25.235 04:02:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.235 04:02:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69955' 00:06:25.235 04:02:26 -- common/autotest_common.sh@955 -- # kill 69955 00:06:25.235 04:02:26 -- common/autotest_common.sh@960 -- # wait 69955 00:06:25.803 00:06:25.803 real 0m2.338s 00:06:25.803 user 0m6.589s 00:06:25.803 sys 0m0.500s 00:06:25.803 ************************************ 00:06:25.803 END TEST locking_overlapped_coremask 00:06:25.803 ************************************ 00:06:25.803 04:02:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.803 04:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.803 04:02:27 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.803 04:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.803 04:02:27 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.803 04:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.803 ************************************ 00:06:25.803 START TEST locking_overlapped_coremask_via_rpc 00:06:25.803 ************************************ 00:06:25.803 04:02:27 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:25.803 04:02:27 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70031 00:06:25.803 04:02:27 -- event/cpu_locks.sh@149 -- # waitforlisten 70031 /var/tmp/spdk.sock 00:06:25.803 04:02:27 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.803 04:02:27 -- common/autotest_common.sh@829 -- # '[' -z 70031 ']' 00:06:25.803 04:02:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.803 04:02:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.803 04:02:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.803 04:02:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.803 04:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.803 [2024-11-26 04:02:27.535868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.803 [2024-11-26 04:02:27.535986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70031 ] 00:06:26.063 [2024-11-26 04:02:27.667636] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.063 [2024-11-26 04:02:27.667682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.063 [2024-11-26 04:02:27.726582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.063 [2024-11-26 04:02:27.726884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.063 [2024-11-26 04:02:27.727032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.063 [2024-11-26 04:02:27.727040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.000 04:02:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.000 04:02:28 -- common/autotest_common.sh@862 -- # return 0 00:06:27.000 04:02:28 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70061 00:06:27.000 04:02:28 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:27.000 04:02:28 -- event/cpu_locks.sh@153 -- # waitforlisten 70061 /var/tmp/spdk2.sock 00:06:27.000 04:02:28 -- common/autotest_common.sh@829 -- # '[' -z 70061 ']' 00:06:27.000 04:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.000 04:02:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.000 04:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:27.000 04:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.000 04:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:27.000 [2024-11-26 04:02:28.575275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.000 [2024-11-26 04:02:28.575398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70061 ] 00:06:27.000 [2024-11-26 04:02:28.721903] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.000 [2024-11-26 04:02:28.721955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.258 [2024-11-26 04:02:28.837637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.258 [2024-11-26 04:02:28.838524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.258 [2024-11-26 04:02:28.841809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:27.258 [2024-11-26 04:02:28.841813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.825 04:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.825 04:02:29 -- common/autotest_common.sh@862 -- # return 0 00:06:27.825 04:02:29 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.825 04:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.825 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:27.825 04:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.825 04:02:29 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.825 04:02:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.825 04:02:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.825 04:02:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:27.825 04:02:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.825 04:02:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:27.825 04:02:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.825 04:02:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.825 04:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.825 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:27.825 [2024-11-26 04:02:29.543905] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70031 has claimed it. 
00:06:27.825 2024/11/26 04:02:29 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:27.825 request: 00:06:27.825 { 00:06:27.825 "method": "framework_enable_cpumask_locks", 00:06:27.826 "params": {} 00:06:27.826 } 00:06:27.826 Got JSON-RPC error response 00:06:27.826 GoRPCClient: error on JSON-RPC call 00:06:27.826 04:02:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:27.826 04:02:29 -- common/autotest_common.sh@653 -- # es=1 00:06:27.826 04:02:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.826 04:02:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.826 04:02:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.826 04:02:29 -- event/cpu_locks.sh@158 -- # waitforlisten 70031 /var/tmp/spdk.sock 00:06:27.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.826 04:02:29 -- common/autotest_common.sh@829 -- # '[' -z 70031 ']' 00:06:27.826 04:02:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.826 04:02:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.826 04:02:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.826 04:02:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.826 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.084 04:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.084 04:02:29 -- common/autotest_common.sh@862 -- # return 0 00:06:28.084 04:02:29 -- event/cpu_locks.sh@159 -- # waitforlisten 70061 /var/tmp/spdk2.sock 00:06:28.084 04:02:29 -- common/autotest_common.sh@829 -- # '[' -z 70061 ']' 00:06:28.084 04:02:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.084 04:02:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.084 04:02:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:28.084 04:02:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.084 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.342 ************************************ 00:06:28.343 END TEST locking_overlapped_coremask_via_rpc 00:06:28.343 ************************************ 00:06:28.343 04:02:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.343 04:02:30 -- common/autotest_common.sh@862 -- # return 0 00:06:28.343 04:02:30 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:28.343 04:02:30 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:28.343 04:02:30 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:28.343 04:02:30 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:28.343 00:06:28.343 real 0m2.583s 00:06:28.343 user 0m1.284s 00:06:28.343 sys 0m0.235s 00:06:28.343 04:02:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.343 04:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:28.601 04:02:30 -- event/cpu_locks.sh@174 -- # cleanup 00:06:28.601 04:02:30 -- event/cpu_locks.sh@15 -- # [[ -z 70031 ]] 00:06:28.601 04:02:30 -- event/cpu_locks.sh@15 -- # killprocess 70031 00:06:28.601 04:02:30 -- common/autotest_common.sh@936 -- # '[' -z 70031 ']' 00:06:28.601 04:02:30 -- common/autotest_common.sh@940 -- # kill -0 70031 00:06:28.601 04:02:30 -- common/autotest_common.sh@941 -- # uname 00:06:28.601 04:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.601 04:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70031 00:06:28.601 killing process with pid 70031 00:06:28.601 04:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.601 04:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.601 04:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70031' 00:06:28.601 04:02:30 -- common/autotest_common.sh@955 -- # kill 70031 00:06:28.601 04:02:30 -- common/autotest_common.sh@960 -- # wait 70031 00:06:29.169 04:02:30 -- event/cpu_locks.sh@16 -- # [[ -z 70061 ]] 00:06:29.169 04:02:30 -- event/cpu_locks.sh@16 -- # killprocess 70061 00:06:29.169 04:02:30 -- common/autotest_common.sh@936 -- # '[' -z 70061 ']' 00:06:29.169 04:02:30 -- common/autotest_common.sh@940 -- # kill -0 70061 00:06:29.169 04:02:30 -- common/autotest_common.sh@941 -- # uname 00:06:29.169 04:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.169 04:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70061 00:06:29.169 killing process with pid 70061 00:06:29.169 04:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:29.169 04:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:29.169 04:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70061' 00:06:29.169 04:02:30 -- common/autotest_common.sh@955 -- # kill 70061 00:06:29.169 04:02:30 -- common/autotest_common.sh@960 -- # wait 70061 00:06:29.428 04:02:31 -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.428 04:02:31 -- event/cpu_locks.sh@1 -- # cleanup 00:06:29.428 04:02:31 -- event/cpu_locks.sh@15 -- # [[ -z 70031 ]] 00:06:29.428 04:02:31 -- event/cpu_locks.sh@15 -- # killprocess 70031 00:06:29.428 04:02:31 -- 
common/autotest_common.sh@936 -- # '[' -z 70031 ']' 00:06:29.428 04:02:31 -- common/autotest_common.sh@940 -- # kill -0 70031 00:06:29.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70031) - No such process 00:06:29.428 Process with pid 70031 is not found 00:06:29.428 04:02:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70031 is not found' 00:06:29.428 04:02:31 -- event/cpu_locks.sh@16 -- # [[ -z 70061 ]] 00:06:29.428 04:02:31 -- event/cpu_locks.sh@16 -- # killprocess 70061 00:06:29.428 04:02:31 -- common/autotest_common.sh@936 -- # '[' -z 70061 ']' 00:06:29.428 04:02:31 -- common/autotest_common.sh@940 -- # kill -0 70061 00:06:29.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70061) - No such process 00:06:29.428 Process with pid 70061 is not found 00:06:29.428 04:02:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70061 is not found' 00:06:29.428 04:02:31 -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.428 00:06:29.428 real 0m21.556s 00:06:29.428 user 0m36.956s 00:06:29.428 sys 0m6.162s 00:06:29.428 04:02:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.428 04:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.428 ************************************ 00:06:29.428 END TEST cpu_locks 00:06:29.428 ************************************ 00:06:29.428 00:06:29.428 real 0m49.431s 00:06:29.428 user 1m34.290s 00:06:29.428 sys 0m10.025s 00:06:29.428 04:02:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.428 04:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.428 ************************************ 00:06:29.428 END TEST event 00:06:29.428 ************************************ 00:06:29.428 04:02:31 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:29.428 04:02:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.428 04:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.428 04:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.428 ************************************ 00:06:29.428 START TEST thread 00:06:29.428 ************************************ 00:06:29.428 04:02:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:29.428 * Looking for test storage... 
00:06:29.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:29.688 04:02:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:29.688 04:02:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:29.688 04:02:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:29.688 04:02:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:29.688 04:02:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:29.688 04:02:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:29.688 04:02:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:29.688 04:02:31 -- scripts/common.sh@335 -- # IFS=.-: 00:06:29.688 04:02:31 -- scripts/common.sh@335 -- # read -ra ver1 00:06:29.688 04:02:31 -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.688 04:02:31 -- scripts/common.sh@336 -- # read -ra ver2 00:06:29.688 04:02:31 -- scripts/common.sh@337 -- # local 'op=<' 00:06:29.688 04:02:31 -- scripts/common.sh@339 -- # ver1_l=2 00:06:29.688 04:02:31 -- scripts/common.sh@340 -- # ver2_l=1 00:06:29.688 04:02:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:29.688 04:02:31 -- scripts/common.sh@343 -- # case "$op" in 00:06:29.688 04:02:31 -- scripts/common.sh@344 -- # : 1 00:06:29.688 04:02:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:29.688 04:02:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.688 04:02:31 -- scripts/common.sh@364 -- # decimal 1 00:06:29.688 04:02:31 -- scripts/common.sh@352 -- # local d=1 00:06:29.688 04:02:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.688 04:02:31 -- scripts/common.sh@354 -- # echo 1 00:06:29.688 04:02:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:29.688 04:02:31 -- scripts/common.sh@365 -- # decimal 2 00:06:29.688 04:02:31 -- scripts/common.sh@352 -- # local d=2 00:06:29.688 04:02:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.688 04:02:31 -- scripts/common.sh@354 -- # echo 2 00:06:29.688 04:02:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:29.688 04:02:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:29.688 04:02:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:29.688 04:02:31 -- scripts/common.sh@367 -- # return 0 00:06:29.688 04:02:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.688 04:02:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:29.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.688 --rc genhtml_branch_coverage=1 00:06:29.688 --rc genhtml_function_coverage=1 00:06:29.688 --rc genhtml_legend=1 00:06:29.688 --rc geninfo_all_blocks=1 00:06:29.688 --rc geninfo_unexecuted_blocks=1 00:06:29.688 00:06:29.688 ' 00:06:29.688 04:02:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:29.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.688 --rc genhtml_branch_coverage=1 00:06:29.688 --rc genhtml_function_coverage=1 00:06:29.688 --rc genhtml_legend=1 00:06:29.688 --rc geninfo_all_blocks=1 00:06:29.688 --rc geninfo_unexecuted_blocks=1 00:06:29.688 00:06:29.688 ' 00:06:29.688 04:02:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:29.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.688 --rc genhtml_branch_coverage=1 00:06:29.688 --rc genhtml_function_coverage=1 00:06:29.688 --rc genhtml_legend=1 00:06:29.688 --rc geninfo_all_blocks=1 00:06:29.688 --rc geninfo_unexecuted_blocks=1 00:06:29.688 00:06:29.688 ' 00:06:29.688 04:02:31 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:29.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.688 --rc genhtml_branch_coverage=1 00:06:29.688 --rc genhtml_function_coverage=1 00:06:29.688 --rc genhtml_legend=1 00:06:29.688 --rc geninfo_all_blocks=1 00:06:29.688 --rc geninfo_unexecuted_blocks=1 00:06:29.688 00:06:29.688 ' 00:06:29.688 04:02:31 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:29.688 04:02:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:29.688 04:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.688 04:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.688 ************************************ 00:06:29.688 START TEST thread_poller_perf 00:06:29.688 ************************************ 00:06:29.688 04:02:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:29.688 [2024-11-26 04:02:31.300945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.688 [2024-11-26 04:02:31.301052] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:06:29.688 [2024-11-26 04:02:31.437485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.947 [2024-11-26 04:02:31.516886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.947 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:30.884 [2024-11-26T04:02:32.652Z] ====================================== 00:06:30.884 [2024-11-26T04:02:32.652Z] busy:2206738664 (cyc) 00:06:30.884 [2024-11-26T04:02:32.652Z] total_run_count: 389000 00:06:30.884 [2024-11-26T04:02:32.652Z] tsc_hz: 2200000000 (cyc) 00:06:30.884 [2024-11-26T04:02:32.652Z] ====================================== 00:06:30.884 [2024-11-26T04:02:32.652Z] poller_cost: 5672 (cyc), 2578 (nsec) 00:06:30.884 00:06:30.884 real 0m1.311s 00:06:30.884 user 0m1.130s 00:06:30.884 sys 0m0.072s 00:06:30.884 04:02:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.884 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.884 ************************************ 00:06:30.884 END TEST thread_poller_perf 00:06:30.884 ************************************ 00:06:30.884 04:02:32 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.884 04:02:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:30.884 04:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.884 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.884 ************************************ 00:06:30.884 START TEST thread_poller_perf 00:06:30.884 ************************************ 00:06:30.884 04:02:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.144 [2024-11-26 04:02:32.659973] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:31.144 [2024-11-26 04:02:32.660075] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70250 ] 00:06:31.144 [2024-11-26 04:02:32.797963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.144 [2024-11-26 04:02:32.878913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.144 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:32.518 [2024-11-26T04:02:34.286Z] ====================================== 00:06:32.518 [2024-11-26T04:02:34.286Z] busy:2202688570 (cyc) 00:06:32.518 [2024-11-26T04:02:34.286Z] total_run_count: 5358000 00:06:32.518 [2024-11-26T04:02:34.286Z] tsc_hz: 2200000000 (cyc) 00:06:32.518 [2024-11-26T04:02:34.286Z] ====================================== 00:06:32.518 [2024-11-26T04:02:34.286Z] poller_cost: 411 (cyc), 186 (nsec) 00:06:32.518 00:06:32.518 real 0m1.307s 00:06:32.518 user 0m1.128s 00:06:32.518 sys 0m0.070s 00:06:32.518 04:02:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.518 ************************************ 00:06:32.518 END TEST thread_poller_perf 00:06:32.518 ************************************ 00:06:32.518 04:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:32.518 04:02:33 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:32.518 00:06:32.518 real 0m2.868s 00:06:32.518 user 0m2.382s 00:06:32.518 sys 0m0.270s 00:06:32.518 04:02:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.518 04:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:32.518 ************************************ 00:06:32.518 END TEST thread 00:06:32.518 ************************************ 00:06:32.518 04:02:34 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:32.518 04:02:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.518 04:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.518 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:32.518 ************************************ 00:06:32.518 START TEST accel 00:06:32.518 ************************************ 00:06:32.518 04:02:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:32.518 * Looking for test storage... 
00:06:32.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:32.518 04:02:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:32.518 04:02:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:32.518 04:02:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:32.518 04:02:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:32.518 04:02:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:32.518 04:02:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:32.518 04:02:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:32.518 04:02:34 -- scripts/common.sh@335 -- # IFS=.-: 00:06:32.518 04:02:34 -- scripts/common.sh@335 -- # read -ra ver1 00:06:32.518 04:02:34 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.518 04:02:34 -- scripts/common.sh@336 -- # read -ra ver2 00:06:32.518 04:02:34 -- scripts/common.sh@337 -- # local 'op=<' 00:06:32.518 04:02:34 -- scripts/common.sh@339 -- # ver1_l=2 00:06:32.519 04:02:34 -- scripts/common.sh@340 -- # ver2_l=1 00:06:32.519 04:02:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:32.519 04:02:34 -- scripts/common.sh@343 -- # case "$op" in 00:06:32.519 04:02:34 -- scripts/common.sh@344 -- # : 1 00:06:32.519 04:02:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:32.519 04:02:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.519 04:02:34 -- scripts/common.sh@364 -- # decimal 1 00:06:32.519 04:02:34 -- scripts/common.sh@352 -- # local d=1 00:06:32.519 04:02:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.519 04:02:34 -- scripts/common.sh@354 -- # echo 1 00:06:32.519 04:02:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:32.519 04:02:34 -- scripts/common.sh@365 -- # decimal 2 00:06:32.519 04:02:34 -- scripts/common.sh@352 -- # local d=2 00:06:32.519 04:02:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.519 04:02:34 -- scripts/common.sh@354 -- # echo 2 00:06:32.519 04:02:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:32.519 04:02:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:32.519 04:02:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:32.519 04:02:34 -- scripts/common.sh@367 -- # return 0 00:06:32.519 04:02:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.519 04:02:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:32.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.519 --rc genhtml_branch_coverage=1 00:06:32.519 --rc genhtml_function_coverage=1 00:06:32.519 --rc genhtml_legend=1 00:06:32.519 --rc geninfo_all_blocks=1 00:06:32.519 --rc geninfo_unexecuted_blocks=1 00:06:32.519 00:06:32.519 ' 00:06:32.519 04:02:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:32.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.519 --rc genhtml_branch_coverage=1 00:06:32.519 --rc genhtml_function_coverage=1 00:06:32.519 --rc genhtml_legend=1 00:06:32.519 --rc geninfo_all_blocks=1 00:06:32.519 --rc geninfo_unexecuted_blocks=1 00:06:32.519 00:06:32.519 ' 00:06:32.519 04:02:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:32.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.519 --rc genhtml_branch_coverage=1 00:06:32.519 --rc genhtml_function_coverage=1 00:06:32.519 --rc genhtml_legend=1 00:06:32.519 --rc geninfo_all_blocks=1 00:06:32.519 --rc geninfo_unexecuted_blocks=1 00:06:32.519 00:06:32.519 ' 00:06:32.519 04:02:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:32.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.519 --rc genhtml_branch_coverage=1 00:06:32.519 --rc genhtml_function_coverage=1 00:06:32.519 --rc genhtml_legend=1 00:06:32.519 --rc geninfo_all_blocks=1 00:06:32.519 --rc geninfo_unexecuted_blocks=1 00:06:32.519 00:06:32.519 ' 00:06:32.519 04:02:34 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:32.519 04:02:34 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:32.519 04:02:34 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.519 04:02:34 -- accel/accel.sh@59 -- # spdk_tgt_pid=70332 00:06:32.519 04:02:34 -- accel/accel.sh@60 -- # waitforlisten 70332 00:06:32.519 04:02:34 -- common/autotest_common.sh@829 -- # '[' -z 70332 ']' 00:06:32.519 04:02:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.519 04:02:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.519 04:02:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.519 04:02:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.519 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 04:02:34 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:32.519 04:02:34 -- accel/accel.sh@58 -- # build_accel_config 00:06:32.519 04:02:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.519 04:02:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.519 04:02:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.519 04:02:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.519 04:02:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.519 04:02:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.519 04:02:34 -- accel/accel.sh@42 -- # jq -r . 00:06:32.778 [2024-11-26 04:02:34.304895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.778 [2024-11-26 04:02:34.304991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70332 ] 00:06:32.778 [2024-11-26 04:02:34.440641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.778 [2024-11-26 04:02:34.515325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.778 [2024-11-26 04:02:34.515483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.719 04:02:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.719 04:02:35 -- common/autotest_common.sh@862 -- # return 0 00:06:33.719 04:02:35 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:33.719 04:02:35 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:33.719 04:02:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.719 04:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.719 04:02:35 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:33.719 04:02:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # IFS== 00:06:33.719 04:02:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:33.719 04:02:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:33.719 04:02:35 -- accel/accel.sh@67 -- # killprocess 70332 00:06:33.719 04:02:35 -- common/autotest_common.sh@936 -- # '[' -z 70332 ']' 00:06:33.719 04:02:35 -- common/autotest_common.sh@940 -- # kill -0 70332 00:06:33.719 04:02:35 -- common/autotest_common.sh@941 -- # uname 00:06:33.719 04:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.719 04:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70332 00:06:33.719 04:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.719 04:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.719 04:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70332' 00:06:33.719 killing process with pid 70332 00:06:33.719 04:02:35 -- common/autotest_common.sh@955 -- # kill 70332 00:06:33.719 04:02:35 -- common/autotest_common.sh@960 -- # wait 70332 00:06:34.287 04:02:35 -- accel/accel.sh@68 -- # trap - ERR 00:06:34.287 04:02:35 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:34.287 04:02:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:34.287 04:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.287 04:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 04:02:35 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:34.287 04:02:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.287 04:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.287 04:02:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.287 04:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.287 04:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.287 04:02:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.287 04:02:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.287 04:02:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.287 04:02:35 -- accel/accel.sh@42 -- # jq -r . 
00:06:34.287 04:02:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.287 04:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 04:02:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.287 04:02:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:34.287 04:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.287 04:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 ************************************ 00:06:34.287 START TEST accel_missing_filename 00:06:34.287 ************************************ 00:06:34.287 04:02:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:34.287 04:02:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:34.287 04:02:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.287 04:02:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:34.287 04:02:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.287 04:02:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:34.287 04:02:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.287 04:02:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:34.287 04:02:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.287 04:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.287 04:02:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.287 04:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.287 04:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.287 04:02:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.287 04:02:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.287 04:02:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.287 04:02:35 -- accel/accel.sh@42 -- # jq -r . 00:06:34.287 [2024-11-26 04:02:35.974740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.287 [2024-11-26 04:02:35.974839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70401 ] 00:06:34.546 [2024-11-26 04:02:36.115126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.546 [2024-11-26 04:02:36.190734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.546 [2024-11-26 04:02:36.263624] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.804 [2024-11-26 04:02:36.368253] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:34.804 A filename is required. 
00:06:34.804 04:02:36 -- common/autotest_common.sh@653 -- # es=234 00:06:34.804 04:02:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.804 04:02:36 -- common/autotest_common.sh@662 -- # es=106 00:06:34.804 04:02:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:34.804 04:02:36 -- common/autotest_common.sh@670 -- # es=1 00:06:34.804 04:02:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.804 00:06:34.804 real 0m0.530s 00:06:34.804 user 0m0.337s 00:06:34.804 sys 0m0.143s 00:06:34.804 04:02:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.804 ************************************ 00:06:34.804 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.804 END TEST accel_missing_filename 00:06:34.804 ************************************ 00:06:34.804 04:02:36 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.804 04:02:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:34.804 04:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.804 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.804 ************************************ 00:06:34.804 START TEST accel_compress_verify 00:06:34.804 ************************************ 00:06:34.804 04:02:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.804 04:02:36 -- common/autotest_common.sh@650 -- # local es=0 00:06:34.804 04:02:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.804 04:02:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:34.805 04:02:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.805 04:02:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:34.805 04:02:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.805 04:02:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.805 04:02:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.805 04:02:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.805 04:02:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.805 04:02:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.805 04:02:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.805 04:02:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.805 04:02:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.805 04:02:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.805 04:02:36 -- accel/accel.sh@42 -- # jq -r . 00:06:34.805 [2024-11-26 04:02:36.548135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:34.805 [2024-11-26 04:02:36.548208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70433 ] 00:06:35.063 [2024-11-26 04:02:36.678098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.063 [2024-11-26 04:02:36.748643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.063 [2024-11-26 04:02:36.819423] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.321 [2024-11-26 04:02:36.923906] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:35.321 00:06:35.321 Compression does not support the verify option, aborting. 00:06:35.322 04:02:37 -- common/autotest_common.sh@653 -- # es=161 00:06:35.322 04:02:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.322 04:02:37 -- common/autotest_common.sh@662 -- # es=33 00:06:35.322 04:02:37 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.322 04:02:37 -- common/autotest_common.sh@670 -- # es=1 00:06:35.322 04:02:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.322 00:06:35.322 real 0m0.505s 00:06:35.322 user 0m0.321s 00:06:35.322 sys 0m0.133s 00:06:35.322 ************************************ 00:06:35.322 END TEST accel_compress_verify 00:06:35.322 04:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.322 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.322 ************************************ 00:06:35.322 04:02:37 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.322 04:02:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:35.322 04:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.322 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.580 ************************************ 00:06:35.580 START TEST accel_wrong_workload 00:06:35.580 ************************************ 00:06:35.580 04:02:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:35.580 04:02:37 -- common/autotest_common.sh@650 -- # local es=0 00:06:35.580 04:02:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.580 04:02:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.580 04:02:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.580 04:02:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.580 04:02:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.580 04:02:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:35.580 04:02:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.580 04:02:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.580 04:02:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.580 04:02:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.580 04:02:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.580 04:02:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.580 04:02:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.580 04:02:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.580 04:02:37 -- accel/accel.sh@42 -- # jq -r . 
00:06:35.580 Unsupported workload type: foobar 00:06:35.580 [2024-11-26 04:02:37.110172] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:35.580 accel_perf options: 00:06:35.580 [-h help message] 00:06:35.580 [-q queue depth per core] 00:06:35.580 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.580 [-T number of threads per core 00:06:35.580 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.580 [-t time in seconds] 00:06:35.581 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.581 [ dif_verify, , dif_generate, dif_generate_copy 00:06:35.581 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.581 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.581 [-S for crc32c workload, use this seed value (default 0) 00:06:35.581 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.581 [-f for fill workload, use this BYTE value (default 255) 00:06:35.581 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.581 [-y verify result if this switch is on] 00:06:35.581 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.581 Can be used to spread operations across a wider range of memory. 00:06:35.581 04:02:37 -- common/autotest_common.sh@653 -- # es=1 00:06:35.581 04:02:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.581 04:02:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.581 04:02:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.581 00:06:35.581 real 0m0.032s 00:06:35.581 user 0m0.017s 00:06:35.581 sys 0m0.015s 00:06:35.581 ************************************ 00:06:35.581 END TEST accel_wrong_workload 00:06:35.581 ************************************ 00:06:35.581 04:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.581 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.581 04:02:37 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.581 04:02:37 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:35.581 04:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.581 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.581 ************************************ 00:06:35.581 START TEST accel_negative_buffers 00:06:35.581 ************************************ 00:06:35.581 04:02:37 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.581 04:02:37 -- common/autotest_common.sh@650 -- # local es=0 00:06:35.581 04:02:37 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:35.581 04:02:37 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.581 04:02:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.581 04:02:37 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.581 04:02:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.581 04:02:37 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:35.581 04:02:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:35.581 04:02:37 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:35.581 04:02:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.581 04:02:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.581 04:02:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.581 04:02:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.581 04:02:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.581 04:02:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.581 04:02:37 -- accel/accel.sh@42 -- # jq -r . 00:06:35.581 -x option must be non-negative. 00:06:35.581 [2024-11-26 04:02:37.189028] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:35.581 accel_perf options: 00:06:35.581 [-h help message] 00:06:35.581 [-q queue depth per core] 00:06:35.581 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.581 [-T number of threads per core 00:06:35.581 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.581 [-t time in seconds] 00:06:35.581 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.581 [ dif_verify, , dif_generate, dif_generate_copy 00:06:35.581 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.581 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.581 [-S for crc32c workload, use this seed value (default 0) 00:06:35.581 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.581 [-f for fill workload, use this BYTE value (default 255) 00:06:35.581 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.581 [-y verify result if this switch is on] 00:06:35.581 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.581 Can be used to spread operations across a wider range of memory. 
00:06:35.581 04:02:37 -- common/autotest_common.sh@653 -- # es=1 00:06:35.581 04:02:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.581 04:02:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.581 04:02:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.581 00:06:35.581 real 0m0.029s 00:06:35.581 user 0m0.016s 00:06:35.581 sys 0m0.013s 00:06:35.581 04:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.581 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.581 ************************************ 00:06:35.581 END TEST accel_negative_buffers 00:06:35.581 ************************************ 00:06:35.581 04:02:37 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:35.581 04:02:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.581 04:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.581 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.581 ************************************ 00:06:35.581 START TEST accel_crc32c 00:06:35.581 ************************************ 00:06:35.581 04:02:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:35.581 04:02:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.581 04:02:37 -- accel/accel.sh@17 -- # local accel_module 00:06:35.581 04:02:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:35.581 04:02:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:35.581 04:02:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.581 04:02:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.581 04:02:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.581 04:02:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.581 04:02:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.581 04:02:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.581 04:02:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.581 04:02:37 -- accel/accel.sh@42 -- # jq -r . 00:06:35.581 [2024-11-26 04:02:37.271956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.581 [2024-11-26 04:02:37.272054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70486 ] 00:06:35.840 [2024-11-26 04:02:37.410457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.840 [2024-11-26 04:02:37.492725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.219 04:02:38 -- accel/accel.sh@18 -- # out=' 00:06:37.219 SPDK Configuration: 00:06:37.219 Core mask: 0x1 00:06:37.219 00:06:37.219 Accel Perf Configuration: 00:06:37.219 Workload Type: crc32c 00:06:37.219 CRC-32C seed: 32 00:06:37.219 Transfer size: 4096 bytes 00:06:37.219 Vector count 1 00:06:37.219 Module: software 00:06:37.219 Queue depth: 32 00:06:37.219 Allocate depth: 32 00:06:37.219 # threads/core: 1 00:06:37.219 Run time: 1 seconds 00:06:37.219 Verify: Yes 00:06:37.219 00:06:37.219 Running for 1 seconds... 
00:06:37.219 00:06:37.219 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.219 ------------------------------------------------------------------------------------ 00:06:37.219 0,0 561376/s 2192 MiB/s 0 0 00:06:37.219 ==================================================================================== 00:06:37.219 Total 561376/s 2192 MiB/s 0 0' 00:06:37.219 04:02:38 -- accel/accel.sh@20 -- # IFS=: 00:06:37.219 04:02:38 -- accel/accel.sh@20 -- # read -r var val 00:06:37.219 04:02:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:37.219 04:02:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.219 04:02:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:37.219 04:02:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.219 04:02:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.219 04:02:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.219 04:02:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.219 04:02:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.219 04:02:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.219 04:02:38 -- accel/accel.sh@42 -- # jq -r . 00:06:37.219 [2024-11-26 04:02:38.770144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.219 [2024-11-26 04:02:38.770242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:06:37.219 [2024-11-26 04:02:38.898612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.219 [2024-11-26 04:02:38.962897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=0x1 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=crc32c 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=32 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=software 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=32 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=32 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=1 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val=Yes 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.479 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.479 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.479 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.858 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.858 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.858 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.858 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.858 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.858 04:02:40 -- 
accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.858 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.858 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.858 04:02:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.858 04:02:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:38.858 04:02:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.858 00:06:38.858 real 0m2.970s 00:06:38.858 user 0m2.502s 00:06:38.858 sys 0m0.267s 00:06:38.858 04:02:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.858 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 ************************************ 00:06:38.858 END TEST accel_crc32c 00:06:38.858 ************************************ 00:06:38.858 04:02:40 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:38.858 04:02:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:38.858 04:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.858 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:38.858 ************************************ 00:06:38.858 START TEST accel_crc32c_C2 00:06:38.858 ************************************ 00:06:38.858 04:02:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:38.858 04:02:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.858 04:02:40 -- accel/accel.sh@17 -- # local accel_module 00:06:38.858 04:02:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:38.858 04:02:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:38.858 04:02:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.858 04:02:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.858 04:02:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.858 04:02:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.858 04:02:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.858 04:02:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.858 04:02:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.858 04:02:40 -- accel/accel.sh@42 -- # jq -r . 00:06:38.858 [2024-11-26 04:02:40.301987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.858 [2024-11-26 04:02:40.302089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70540 ] 00:06:38.858 [2024-11-26 04:02:40.439122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.858 [2024-11-26 04:02:40.514336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.237 04:02:41 -- accel/accel.sh@18 -- # out=' 00:06:40.237 SPDK Configuration: 00:06:40.237 Core mask: 0x1 00:06:40.237 00:06:40.237 Accel Perf Configuration: 00:06:40.237 Workload Type: crc32c 00:06:40.237 CRC-32C seed: 0 00:06:40.237 Transfer size: 4096 bytes 00:06:40.237 Vector count 2 00:06:40.237 Module: software 00:06:40.237 Queue depth: 32 00:06:40.237 Allocate depth: 32 00:06:40.237 # threads/core: 1 00:06:40.237 Run time: 1 seconds 00:06:40.237 Verify: Yes 00:06:40.237 00:06:40.237 Running for 1 seconds... 
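This second crc32c run passes -C 2, so each completed operation spans two 4096-byte vectors. Reading the table below with that in mind, the per-core row seems to count both source vectors while the Total row counts only the 4096-byte transfer size — an inference from the arithmetic, not something stated in the output:

    echo $(( 439584 * 4096 * 2 / 1024 / 1024 ))   # 3434 MiB/s, the per-core figure below
    echo $(( 439584 * 4096 / 1024 / 1024 ))       # 1717 MiB/s, the Total row below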
00:06:40.237 00:06:40.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.237 ------------------------------------------------------------------------------------ 00:06:40.237 0,0 439584/s 3434 MiB/s 0 0 00:06:40.237 ==================================================================================== 00:06:40.237 Total 439584/s 1717 MiB/s 0 0' 00:06:40.237 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.237 04:02:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.237 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.237 04:02:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.237 04:02:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.237 04:02:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.237 04:02:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.237 04:02:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.237 04:02:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.237 04:02:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.237 04:02:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.237 04:02:41 -- accel/accel.sh@42 -- # jq -r . 00:06:40.237 [2024-11-26 04:02:41.790840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.237 [2024-11-26 04:02:41.790930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70565 ] 00:06:40.237 [2024-11-26 04:02:41.923848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.237 [2024-11-26 04:02:41.986419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=0x1 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=crc32c 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=0 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=software 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=32 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=32 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=1 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val=Yes 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.497 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.497 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.497 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.875 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.875 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.875 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.875 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.875 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.875 04:02:43 -- 
accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.875 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.875 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.875 04:02:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.875 04:02:43 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:41.875 04:02:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.875 00:06:41.875 real 0m2.965s 00:06:41.875 user 0m2.491s 00:06:41.875 sys 0m0.270s 00:06:41.875 04:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.875 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.875 ************************************ 00:06:41.875 END TEST accel_crc32c_C2 00:06:41.875 ************************************ 00:06:41.875 04:02:43 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:41.875 04:02:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:41.875 04:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.875 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.875 ************************************ 00:06:41.875 START TEST accel_copy 00:06:41.875 ************************************ 00:06:41.875 04:02:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:41.875 04:02:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.875 04:02:43 -- accel/accel.sh@17 -- # local accel_module 00:06:41.875 04:02:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:41.875 04:02:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:41.875 04:02:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.875 04:02:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.875 04:02:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.875 04:02:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.875 04:02:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.875 04:02:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.875 04:02:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.875 04:02:43 -- accel/accel.sh@42 -- # jq -r . 00:06:41.875 [2024-11-26 04:02:43.315737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.875 [2024-11-26 04:02:43.315819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70594 ] 00:06:41.875 [2024-11-26 04:02:43.447138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.875 [2024-11-26 04:02:43.525519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.254 04:02:44 -- accel/accel.sh@18 -- # out=' 00:06:43.254 SPDK Configuration: 00:06:43.254 Core mask: 0x1 00:06:43.254 00:06:43.254 Accel Perf Configuration: 00:06:43.254 Workload Type: copy 00:06:43.254 Transfer size: 4096 bytes 00:06:43.254 Vector count 1 00:06:43.254 Module: software 00:06:43.254 Queue depth: 32 00:06:43.254 Allocate depth: 32 00:06:43.254 # threads/core: 1 00:06:43.254 Run time: 1 seconds 00:06:43.254 Verify: Yes 00:06:43.254 00:06:43.254 Running for 1 seconds... 
00:06:43.254 00:06:43.254 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.254 ------------------------------------------------------------------------------------ 00:06:43.254 0,0 392352/s 1532 MiB/s 0 0 00:06:43.254 ==================================================================================== 00:06:43.254 Total 392352/s 1532 MiB/s 0 0' 00:06:43.254 04:02:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:43.254 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.254 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.254 04:02:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:43.254 04:02:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.254 04:02:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.254 04:02:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.254 04:02:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.254 04:02:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.254 04:02:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.254 04:02:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.254 04:02:44 -- accel/accel.sh@42 -- # jq -r . 00:06:43.254 [2024-11-26 04:02:44.794074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.254 [2024-11-26 04:02:44.794148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:06:43.254 [2024-11-26 04:02:44.921569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.254 [2024-11-26 04:02:44.987414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val=0x1 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val=copy 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.513 04:02:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.513 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.513 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- 
accel/accel.sh@21 -- # val= 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val=software 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val=32 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val=32 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val=1 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val=Yes 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:43.514 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:43.514 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:43.514 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@21 -- # val= 00:06:44.892 04:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@21 -- # val= 00:06:44.892 04:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@21 -- # val= 00:06:44.892 04:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@21 -- # val= 00:06:44.892 04:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@21 -- # val= 00:06:44.892 04:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@21 -- # val= 00:06:44.892 04:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.892 04:02:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.892 04:02:46 -- 
accel/accel.sh@20 -- # read -r var val 00:06:44.892 04:02:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.892 04:02:46 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:44.892 04:02:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.892 00:06:44.892 real 0m2.949s 00:06:44.892 user 0m2.479s 00:06:44.892 sys 0m0.266s 00:06:44.892 04:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.892 ************************************ 00:06:44.892 END TEST accel_copy 00:06:44.892 ************************************ 00:06:44.892 04:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.892 04:02:46 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.892 04:02:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:44.892 04:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.892 04:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.892 ************************************ 00:06:44.892 START TEST accel_fill 00:06:44.892 ************************************ 00:06:44.892 04:02:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.892 04:02:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.892 04:02:46 -- accel/accel.sh@17 -- # local accel_module 00:06:44.892 04:02:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.892 04:02:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.892 04:02:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.892 04:02:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.892 04:02:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.892 04:02:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.892 04:02:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.892 04:02:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.892 04:02:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.892 04:02:46 -- accel/accel.sh@42 -- # jq -r . 00:06:44.892 [2024-11-26 04:02:46.321745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.892 [2024-11-26 04:02:46.321848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70648 ] 00:06:44.892 [2024-11-26 04:02:46.458089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.892 [2024-11-26 04:02:46.535419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.271 04:02:47 -- accel/accel.sh@18 -- # out=' 00:06:46.271 SPDK Configuration: 00:06:46.271 Core mask: 0x1 00:06:46.271 00:06:46.271 Accel Perf Configuration: 00:06:46.271 Workload Type: fill 00:06:46.271 Fill pattern: 0x80 00:06:46.271 Transfer size: 4096 bytes 00:06:46.271 Vector count 1 00:06:46.271 Module: software 00:06:46.271 Queue depth: 64 00:06:46.271 Allocate depth: 64 00:06:46.271 # threads/core: 1 00:06:46.271 Run time: 1 seconds 00:06:46.271 Verify: Yes 00:06:46.271 00:06:46.271 Running for 1 seconds... 
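The fill run above is started as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, and those arguments map one-to-one onto the configuration block above: -f 128 is reported as fill pattern 0x80, and -q 64 / -a 64 as queue and allocate depth 64. A standalone reproduction against the same example binary would look roughly like this (a sketch; it omits the -c /dev/fd/62 JSON config the harness always supplies):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y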
00:06:46.271 00:06:46.271 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.271 ------------------------------------------------------------------------------------ 00:06:46.271 0,0 575104/s 2246 MiB/s 0 0 00:06:46.271 ==================================================================================== 00:06:46.271 Total 575104/s 2246 MiB/s 0 0' 00:06:46.271 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:46.271 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.271 04:02:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.271 04:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.271 04:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.271 04:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.271 04:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.271 04:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.271 04:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.271 04:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.271 04:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.271 04:02:47 -- accel/accel.sh@42 -- # jq -r . 00:06:46.271 [2024-11-26 04:02:47.811300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.271 [2024-11-26 04:02:47.811395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70668 ] 00:06:46.271 [2024-11-26 04:02:47.946838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.271 [2024-11-26 04:02:48.015557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=0x1 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=fill 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=0x80 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 
00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=software 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=64 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=64 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=1 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val=Yes 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.531 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.531 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.531 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 04:02:49 -- accel/accel.sh@21 -- # val= 00:06:47.909 04:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 04:02:49 -- accel/accel.sh@21 -- # val= 00:06:47.909 04:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 04:02:49 -- accel/accel.sh@21 -- # val= 00:06:47.909 04:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 04:02:49 -- accel/accel.sh@21 -- # val= 00:06:47.909 04:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 04:02:49 -- accel/accel.sh@21 -- # val= 00:06:47.909 04:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # IFS=: 
00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 ************************************ 00:06:47.909 END TEST accel_fill 00:06:47.909 ************************************ 00:06:47.909 04:02:49 -- accel/accel.sh@21 -- # val= 00:06:47.909 04:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:47.909 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:47.909 04:02:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.909 04:02:49 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:47.909 04:02:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.909 00:06:47.909 real 0m2.978s 00:06:47.909 user 0m2.489s 00:06:47.909 sys 0m0.284s 00:06:47.909 04:02:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.910 04:02:49 -- common/autotest_common.sh@10 -- # set +x 00:06:47.910 04:02:49 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:47.910 04:02:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.910 04:02:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.910 04:02:49 -- common/autotest_common.sh@10 -- # set +x 00:06:47.910 ************************************ 00:06:47.910 START TEST accel_copy_crc32c 00:06:47.910 ************************************ 00:06:47.910 04:02:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:47.910 04:02:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.910 04:02:49 -- accel/accel.sh@17 -- # local accel_module 00:06:47.910 04:02:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:47.910 04:02:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:47.910 04:02:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.910 04:02:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.910 04:02:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.910 04:02:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.910 04:02:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.910 04:02:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.910 04:02:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.910 04:02:49 -- accel/accel.sh@42 -- # jq -r . 00:06:47.910 [2024-11-26 04:02:49.354832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.910 [2024-11-26 04:02:49.354912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70702 ] 00:06:47.910 [2024-11-26 04:02:49.487674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.910 [2024-11-26 04:02:49.558257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.287 04:02:50 -- accel/accel.sh@18 -- # out=' 00:06:49.287 SPDK Configuration: 00:06:49.287 Core mask: 0x1 00:06:49.287 00:06:49.287 Accel Perf Configuration: 00:06:49.287 Workload Type: copy_crc32c 00:06:49.287 CRC-32C seed: 0 00:06:49.287 Vector size: 4096 bytes 00:06:49.287 Transfer size: 4096 bytes 00:06:49.287 Vector count 1 00:06:49.287 Module: software 00:06:49.287 Queue depth: 32 00:06:49.287 Allocate depth: 32 00:06:49.287 # threads/core: 1 00:06:49.287 Run time: 1 seconds 00:06:49.287 Verify: Yes 00:06:49.287 00:06:49.287 Running for 1 seconds... 
00:06:49.287 00:06:49.287 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.287 ------------------------------------------------------------------------------------ 00:06:49.287 0,0 309536/s 1209 MiB/s 0 0 00:06:49.287 ==================================================================================== 00:06:49.287 Total 309536/s 1209 MiB/s 0 0' 00:06:49.287 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 04:02:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:49.287 04:02:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:49.287 04:02:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.287 04:02:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.287 04:02:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.287 04:02:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.287 04:02:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.287 04:02:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.287 04:02:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.287 04:02:50 -- accel/accel.sh@42 -- # jq -r . 00:06:49.287 [2024-11-26 04:02:50.834409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.287 [2024-11-26 04:02:50.834498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70724 ] 00:06:49.287 [2024-11-26 04:02:50.970230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.287 [2024-11-26 04:02:51.037755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=0x1 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=0 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 
04:02:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=software 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=32 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=32 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=1 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.546 04:02:51 -- accel/accel.sh@21 -- # val=Yes 00:06:49.546 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.546 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.547 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.547 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.547 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.547 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.547 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.547 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.547 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.547 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.547 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 04:02:52 -- accel/accel.sh@21 -- # val= 00:06:50.934 04:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 04:02:52 -- accel/accel.sh@21 -- # val= 00:06:50.934 04:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 04:02:52 -- accel/accel.sh@21 -- # val= 00:06:50.934 04:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 04:02:52 -- accel/accel.sh@21 -- # val= 00:06:50.934 04:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # IFS=: 
00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 04:02:52 -- accel/accel.sh@21 -- # val= 00:06:50.934 04:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 04:02:52 -- accel/accel.sh@21 -- # val= 00:06:50.934 04:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:50.934 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:50.934 ************************************ 00:06:50.934 END TEST accel_copy_crc32c 00:06:50.934 ************************************ 00:06:50.934 04:02:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.934 04:02:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:50.935 04:02:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.935 00:06:50.935 real 0m2.965s 00:06:50.935 user 0m2.495s 00:06:50.935 sys 0m0.266s 00:06:50.935 04:02:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.935 04:02:52 -- common/autotest_common.sh@10 -- # set +x 00:06:50.935 04:02:52 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:50.935 04:02:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:50.935 04:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.935 04:02:52 -- common/autotest_common.sh@10 -- # set +x 00:06:50.935 ************************************ 00:06:50.935 START TEST accel_copy_crc32c_C2 00:06:50.935 ************************************ 00:06:50.935 04:02:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:50.935 04:02:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.935 04:02:52 -- accel/accel.sh@17 -- # local accel_module 00:06:50.935 04:02:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:50.935 04:02:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:50.935 04:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.935 04:02:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.935 04:02:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.935 04:02:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.935 04:02:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.935 04:02:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.935 04:02:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.935 04:02:52 -- accel/accel.sh@42 -- # jq -r . 00:06:50.935 [2024-11-26 04:02:52.375465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.935 [2024-11-26 04:02:52.375812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70758 ] 00:06:50.935 [2024-11-26 04:02:52.508801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.935 [2024-11-26 04:02:52.576503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.334 04:02:53 -- accel/accel.sh@18 -- # out=' 00:06:52.334 SPDK Configuration: 00:06:52.334 Core mask: 0x1 00:06:52.334 00:06:52.334 Accel Perf Configuration: 00:06:52.334 Workload Type: copy_crc32c 00:06:52.334 CRC-32C seed: 0 00:06:52.334 Vector size: 4096 bytes 00:06:52.334 Transfer size: 8192 bytes 00:06:52.334 Vector count 2 00:06:52.334 Module: software 00:06:52.334 Queue depth: 32 00:06:52.334 Allocate depth: 32 00:06:52.334 # threads/core: 1 00:06:52.334 Run time: 1 seconds 00:06:52.334 Verify: Yes 00:06:52.334 00:06:52.334 Running for 1 seconds... 00:06:52.334 00:06:52.334 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.334 ------------------------------------------------------------------------------------ 00:06:52.334 0,0 219936/s 1718 MiB/s 0 0 00:06:52.334 ==================================================================================== 00:06:52.334 Total 219936/s 859 MiB/s 0 0' 00:06:52.334 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:52.334 04:02:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:52.334 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.334 04:02:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:52.334 04:02:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.334 04:02:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.334 04:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.334 04:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.334 04:02:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.334 04:02:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.334 04:02:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.334 04:02:53 -- accel/accel.sh@42 -- # jq -r . 00:06:52.334 [2024-11-26 04:02:53.851966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.334 [2024-11-26 04:02:53.852053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:06:52.334 [2024-11-26 04:02:53.987664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.334 [2024-11-26 04:02:54.053683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.593 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.593 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=0x1 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=0 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=software 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=32 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=32 
00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=1 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val=Yes 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.594 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.594 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.594 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@21 -- # val= 00:06:53.972 04:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@21 -- # val= 00:06:53.972 04:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@21 -- # val= 00:06:53.972 04:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@21 -- # val= 00:06:53.972 04:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.972 ************************************ 00:06:53.972 END TEST accel_copy_crc32c_C2 00:06:53.972 ************************************ 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@21 -- # val= 00:06:53.972 04:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@21 -- # val= 00:06:53.972 04:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:53.972 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:53.972 04:02:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.972 04:02:55 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:53.972 04:02:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.972 00:06:53.972 real 0m2.993s 00:06:53.972 user 0m2.526s 00:06:53.972 sys 0m0.260s 00:06:53.972 04:02:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.972 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:53.972 04:02:55 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:53.972 04:02:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
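The copy_crc32c run that just completed pairs a plain buffer copy with a CRC-32C (Castagnoli) checksum of the copied data, seeded with 0 as shown in its configuration; with vector count 2, the 8192-byte transfer is split across two 4096-byte source vectors. A minimal C sketch of that semantic, using one common bitwise reflected-polynomial formulation of CRC-32C rather than SPDK's actual software accel module:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(uint32_t seed, const uint8_t *buf, size_t len)
    {
        uint32_t crc = ~seed;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* copy_crc32c: copy src into dst and return the CRC-32C of the data. */
    static uint32_t copy_crc32c(void *dst, const void *src, size_t len,
                                uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c(seed, (const uint8_t *)src, len);
    }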
00:06:53.972 04:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.972 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:53.972 ************************************ 00:06:53.972 START TEST accel_dualcast 00:06:53.972 ************************************ 00:06:53.972 04:02:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:53.972 04:02:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.972 04:02:55 -- accel/accel.sh@17 -- # local accel_module 00:06:53.972 04:02:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:53.972 04:02:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:53.972 04:02:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.972 04:02:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.972 04:02:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.972 04:02:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.972 04:02:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.972 04:02:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.972 04:02:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.972 04:02:55 -- accel/accel.sh@42 -- # jq -r . 00:06:53.972 [2024-11-26 04:02:55.421118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.972 [2024-11-26 04:02:55.421219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70812 ] 00:06:53.972 [2024-11-26 04:02:55.558635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.972 [2024-11-26 04:02:55.633702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.349 04:02:56 -- accel/accel.sh@18 -- # out=' 00:06:55.349 SPDK Configuration: 00:06:55.349 Core mask: 0x1 00:06:55.349 00:06:55.349 Accel Perf Configuration: 00:06:55.349 Workload Type: dualcast 00:06:55.349 Transfer size: 4096 bytes 00:06:55.349 Vector count 1 00:06:55.349 Module: software 00:06:55.349 Queue depth: 32 00:06:55.349 Allocate depth: 32 00:06:55.349 # threads/core: 1 00:06:55.349 Run time: 1 seconds 00:06:55.349 Verify: Yes 00:06:55.349 00:06:55.349 Running for 1 seconds... 00:06:55.349 00:06:55.349 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.349 ------------------------------------------------------------------------------------ 00:06:55.349 0,0 433696/s 1694 MiB/s 0 0 00:06:55.349 ==================================================================================== 00:06:55.349 Total 433696/s 1694 MiB/s 0 0' 00:06:55.349 04:02:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:55.349 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:55.349 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.349 04:02:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.349 04:02:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.349 04:02:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.349 04:02:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.349 04:02:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.349 04:02:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.349 04:02:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.349 04:02:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.349 04:02:56 -- accel/accel.sh@42 -- # jq -r . 
00:06:55.349 [2024-11-26 04:02:56.904100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.349 [2024-11-26 04:02:56.904172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70832 ] 00:06:55.349 [2024-11-26 04:02:57.033102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.349 [2024-11-26 04:02:57.098261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=0x1 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=dualcast 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=software 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=32 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=32 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=1 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 
04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val=Yes 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.609 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.609 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.609 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@21 -- # val= 00:06:56.987 04:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@21 -- # val= 00:06:56.987 04:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@21 -- # val= 00:06:56.987 04:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@21 -- # val= 00:06:56.987 04:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@21 -- # val= 00:06:56.987 04:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@21 -- # val= 00:06:56.987 04:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:56.987 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:56.987 04:02:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.987 04:02:58 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:56.987 04:02:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.987 00:06:56.987 real 0m2.964s 00:06:56.987 user 0m2.494s 00:06:56.987 sys 0m0.265s 00:06:56.987 04:02:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.987 ************************************ 00:06:56.987 END TEST accel_dualcast 00:06:56.987 ************************************ 00:06:56.987 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:56.987 04:02:58 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:56.987 04:02:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:56.987 04:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.987 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:56.987 ************************************ 00:06:56.987 START TEST accel_compare 00:06:56.987 ************************************ 00:06:56.987 04:02:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:56.987 
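The dualcast workload measured above writes one 4096-byte source buffer to two destination buffers in a single operation. An illustrative sketch of the software fallback semantics (not SPDK code):

    #include <stddef.h>
    #include <string.h>

    /* dualcast: duplicate one source buffer into two destinations. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }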
04:02:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.987 04:02:58 -- accel/accel.sh@17 -- # local accel_module 00:06:56.987 04:02:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:56.987 04:02:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.987 04:02:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.987 04:02:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.987 04:02:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.987 04:02:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.987 04:02:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.987 04:02:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.987 04:02:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.987 04:02:58 -- accel/accel.sh@42 -- # jq -r . 00:06:56.987 [2024-11-26 04:02:58.432436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.987 [2024-11-26 04:02:58.432850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70867 ] 00:06:56.987 [2024-11-26 04:02:58.568986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.987 [2024-11-26 04:02:58.642165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.364 04:02:59 -- accel/accel.sh@18 -- # out=' 00:06:58.364 SPDK Configuration: 00:06:58.364 Core mask: 0x1 00:06:58.364 00:06:58.364 Accel Perf Configuration: 00:06:58.364 Workload Type: compare 00:06:58.364 Transfer size: 4096 bytes 00:06:58.364 Vector count 1 00:06:58.364 Module: software 00:06:58.364 Queue depth: 32 00:06:58.364 Allocate depth: 32 00:06:58.364 # threads/core: 1 00:06:58.364 Run time: 1 seconds 00:06:58.364 Verify: Yes 00:06:58.364 00:06:58.364 Running for 1 seconds... 00:06:58.364 00:06:58.364 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.364 ------------------------------------------------------------------------------------ 00:06:58.364 0,0 572896/s 2237 MiB/s 0 0 00:06:58.364 ==================================================================================== 00:06:58.364 Total 572896/s 2237 MiB/s 0 0' 00:06:58.364 04:02:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:58.364 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:58.364 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.364 04:02:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:58.364 04:02:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.364 04:02:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.364 04:02:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.364 04:02:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.364 04:02:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.364 04:02:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.364 04:02:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.364 04:02:59 -- accel/accel.sh@42 -- # jq -r . 00:06:58.364 [2024-11-26 04:02:59.923526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:58.364 [2024-11-26 04:02:59.923622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70887 ] 00:06:58.364 [2024-11-26 04:03:00.060971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.624 [2024-11-26 04:03:00.137498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=0x1 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=compare 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=software 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=32 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=32 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=1 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val=Yes 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.624 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.624 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.624 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@21 -- # val= 00:07:00.004 04:03:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # IFS=: 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@21 -- # val= 00:07:00.004 04:03:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # IFS=: 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@21 -- # val= 00:07:00.004 04:03:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # IFS=: 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@21 -- # val= 00:07:00.004 04:03:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # IFS=: 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@21 -- # val= 00:07:00.004 04:03:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # IFS=: 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@21 -- # val= 00:07:00.004 04:03:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # IFS=: 00:07:00.004 04:03:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.004 04:03:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.004 04:03:01 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:00.004 04:03:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.004 00:07:00.004 real 0m2.992s 00:07:00.005 user 0m2.507s 00:07:00.005 sys 0m0.279s 00:07:00.005 04:03:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.005 04:03:01 -- common/autotest_common.sh@10 -- # set +x 00:07:00.005 ************************************ 00:07:00.005 END TEST accel_compare 00:07:00.005 ************************************ 00:07:00.005 04:03:01 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:00.005 04:03:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:00.005 04:03:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.005 04:03:01 -- common/autotest_common.sh@10 -- # set +x 00:07:00.005 ************************************ 00:07:00.005 START TEST accel_xor 00:07:00.005 ************************************ 00:07:00.005 04:03:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:00.005 04:03:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.005 04:03:01 -- accel/accel.sh@17 -- # local accel_module 00:07:00.005 
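The compare workload above checks two equal-sized buffers for equality; the zeros in the Failed and Miscompares columns indicate every comparison matched. A minimal sketch of the operation:

    #include <stddef.h>
    #include <string.h>

    /* compare: 0 when the buffers match, non-zero on a miscompare. */
    static int buffers_compare(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) != 0;
    }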
04:03:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:00.005 04:03:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:00.005 04:03:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.005 04:03:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.005 04:03:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.005 04:03:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.005 04:03:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.005 04:03:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.005 04:03:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.005 04:03:01 -- accel/accel.sh@42 -- # jq -r . 00:07:00.005 [2024-11-26 04:03:01.481725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.005 [2024-11-26 04:03:01.481826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70916 ] 00:07:00.005 [2024-11-26 04:03:01.619345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.005 [2024-11-26 04:03:01.701801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.384 04:03:02 -- accel/accel.sh@18 -- # out=' 00:07:01.384 SPDK Configuration: 00:07:01.384 Core mask: 0x1 00:07:01.384 00:07:01.384 Accel Perf Configuration: 00:07:01.384 Workload Type: xor 00:07:01.384 Source buffers: 2 00:07:01.384 Transfer size: 4096 bytes 00:07:01.384 Vector count 1 00:07:01.384 Module: software 00:07:01.384 Queue depth: 32 00:07:01.384 Allocate depth: 32 00:07:01.384 # threads/core: 1 00:07:01.384 Run time: 1 seconds 00:07:01.384 Verify: Yes 00:07:01.384 00:07:01.384 Running for 1 seconds... 00:07:01.384 00:07:01.384 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.384 ------------------------------------------------------------------------------------ 00:07:01.384 0,0 252160/s 985 MiB/s 0 0 00:07:01.384 ==================================================================================== 00:07:01.384 Total 252160/s 985 MiB/s 0 0' 00:07:01.384 04:03:02 -- accel/accel.sh@20 -- # IFS=: 00:07:01.384 04:03:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:01.384 04:03:02 -- accel/accel.sh@20 -- # read -r var val 00:07:01.384 04:03:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:01.384 04:03:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.384 04:03:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.384 04:03:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.384 04:03:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.384 04:03:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.384 04:03:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.384 04:03:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.384 04:03:02 -- accel/accel.sh@42 -- # jq -r . 00:07:01.384 [2024-11-26 04:03:02.994369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:01.384 [2024-11-26 04:03:02.994454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70941 ] 00:07:01.384 [2024-11-26 04:03:03.131227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.643 [2024-11-26 04:03:03.220307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.643 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.643 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.643 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.643 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.643 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.643 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.643 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.643 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.643 04:03:03 -- accel/accel.sh@21 -- # val=0x1 00:07:01.643 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.643 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.643 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.643 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=xor 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=2 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=software 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=32 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=32 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=1 00:07:01.644 04:03:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val=Yes 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:01.644 04:03:03 -- accel/accel.sh@21 -- # val= 00:07:01.644 04:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # IFS=: 00:07:01.644 04:03:03 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@21 -- # val= 00:07:03.023 04:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # IFS=: 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@21 -- # val= 00:07:03.023 04:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # IFS=: 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@21 -- # val= 00:07:03.023 04:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # IFS=: 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@21 -- # val= 00:07:03.023 04:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # IFS=: 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@21 -- # val= 00:07:03.023 04:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # IFS=: 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@21 -- # val= 00:07:03.023 04:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # IFS=: 00:07:03.023 04:03:04 -- accel/accel.sh@20 -- # read -r var val 00:07:03.023 04:03:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.023 04:03:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:03.023 04:03:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.023 00:07:03.023 real 0m3.024s 00:07:03.023 user 0m2.536s 00:07:03.023 sys 0m0.284s 00:07:03.023 04:03:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.023 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:03.023 ************************************ 00:07:03.023 END TEST accel_xor 00:07:03.023 ************************************ 00:07:03.023 04:03:04 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:03.023 04:03:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:03.023 04:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.023 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:03.023 ************************************ 00:07:03.023 START TEST accel_xor 00:07:03.023 ************************************ 00:07:03.023 
04:03:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:03.023 04:03:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.023 04:03:04 -- accel/accel.sh@17 -- # local accel_module 00:07:03.023 04:03:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:03.023 04:03:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:03.023 04:03:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.023 04:03:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.023 04:03:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.023 04:03:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.023 04:03:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.023 04:03:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.023 04:03:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.023 04:03:04 -- accel/accel.sh@42 -- # jq -r . 00:07:03.023 [2024-11-26 04:03:04.559872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.023 [2024-11-26 04:03:04.559975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70970 ] 00:07:03.023 [2024-11-26 04:03:04.698209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.023 [2024-11-26 04:03:04.770284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.403 04:03:06 -- accel/accel.sh@18 -- # out=' 00:07:04.403 SPDK Configuration: 00:07:04.403 Core mask: 0x1 00:07:04.403 00:07:04.403 Accel Perf Configuration: 00:07:04.403 Workload Type: xor 00:07:04.403 Source buffers: 3 00:07:04.403 Transfer size: 4096 bytes 00:07:04.403 Vector count 1 00:07:04.403 Module: software 00:07:04.403 Queue depth: 32 00:07:04.403 Allocate depth: 32 00:07:04.403 # threads/core: 1 00:07:04.403 Run time: 1 seconds 00:07:04.403 Verify: Yes 00:07:04.403 00:07:04.403 Running for 1 seconds... 00:07:04.403 00:07:04.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.403 ------------------------------------------------------------------------------------ 00:07:04.403 0,0 246592/s 963 MiB/s 0 0 00:07:04.403 ==================================================================================== 00:07:04.403 Total 246592/s 963 MiB/s 0 0' 00:07:04.403 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.403 04:03:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:04.403 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.403 04:03:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:04.403 04:03:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.403 04:03:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.403 04:03:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.403 04:03:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.403 04:03:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.403 04:03:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.403 04:03:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.403 04:03:06 -- accel/accel.sh@42 -- # jq -r . 00:07:04.403 [2024-11-26 04:03:06.049045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:04.403 [2024-11-26 04:03:06.049640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70995 ] 00:07:04.663 [2024-11-26 04:03:06.187252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.663 [2024-11-26 04:03:06.254954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=0x1 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=xor 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=3 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=software 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=32 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=32 00:07:04.663 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.663 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.663 04:03:06 -- accel/accel.sh@21 -- # val=1 00:07:04.663 04:03:06 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.664 04:03:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.664 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.664 04:03:06 -- accel/accel.sh@21 -- # val=Yes 00:07:04.664 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.664 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.664 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:04.664 04:03:06 -- accel/accel.sh@21 -- # val= 00:07:04.664 04:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # IFS=: 00:07:04.664 04:03:06 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@21 -- # val= 00:07:06.041 04:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@21 -- # val= 00:07:06.041 04:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@21 -- # val= 00:07:06.041 04:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@21 -- # val= 00:07:06.041 04:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@21 -- # val= 00:07:06.041 04:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@21 -- # val= 00:07:06.041 04:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 04:03:07 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 04:03:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.041 04:03:07 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:06.041 04:03:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.041 00:07:06.041 real 0m2.946s 00:07:06.041 user 0m2.464s 00:07:06.041 sys 0m0.277s 00:07:06.041 04:03:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.041 ************************************ 00:07:06.041 END TEST accel_xor 00:07:06.041 ************************************ 00:07:06.041 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:06.041 04:03:07 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:06.041 04:03:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:06.041 04:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.041 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:06.041 ************************************ 00:07:06.041 START TEST accel_dif_verify 00:07:06.041 ************************************ 
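The two xor runs above combine source buffers byte-wise into one destination: 2 source buffers in the first run and 3 in the second (the -x 3 variant). A sketch of the operation for an arbitrary source count:

    #include <stddef.h>
    #include <stdint.h>

    /* xor: byte-wise XOR of n_srcs source buffers into dst. */
    static void xor_buffers(uint8_t *dst, const uint8_t *const srcs[],
                            size_t n_srcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];
            for (size_t s = 1; s < n_srcs; s++)
                v ^= srcs[s][i];
            dst[i] = v;
        }
    }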
00:07:06.041 04:03:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:06.041 04:03:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.041 04:03:07 -- accel/accel.sh@17 -- # local accel_module 00:07:06.041 04:03:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:06.041 04:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:06.041 04:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.041 04:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.041 04:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.041 04:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.041 04:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.041 04:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.041 04:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.041 04:03:07 -- accel/accel.sh@42 -- # jq -r . 00:07:06.041 [2024-11-26 04:03:07.559030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.041 [2024-11-26 04:03:07.559312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71024 ] 00:07:06.041 [2024-11-26 04:03:07.698275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.041 [2024-11-26 04:03:07.762301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.419 04:03:08 -- accel/accel.sh@18 -- # out=' 00:07:07.419 SPDK Configuration: 00:07:07.419 Core mask: 0x1 00:07:07.419 00:07:07.419 Accel Perf Configuration: 00:07:07.419 Workload Type: dif_verify 00:07:07.419 Vector size: 4096 bytes 00:07:07.419 Transfer size: 4096 bytes 00:07:07.419 Block size: 512 bytes 00:07:07.419 Metadata size: 8 bytes 00:07:07.419 Vector count 1 00:07:07.419 Module: software 00:07:07.419 Queue depth: 32 00:07:07.419 Allocate depth: 32 00:07:07.419 # threads/core: 1 00:07:07.419 Run time: 1 seconds 00:07:07.419 Verify: No 00:07:07.419 00:07:07.419 Running for 1 seconds... 00:07:07.419 00:07:07.419 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.419 ------------------------------------------------------------------------------------ 00:07:07.419 0,0 126432/s 501 MiB/s 0 0 00:07:07.419 ==================================================================================== 00:07:07.419 Total 126432/s 493 MiB/s 0 0' 00:07:07.419 04:03:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:07.419 04:03:08 -- accel/accel.sh@20 -- # IFS=: 00:07:07.419 04:03:08 -- accel/accel.sh@20 -- # read -r var val 00:07:07.419 04:03:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:07.419 04:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.419 04:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.419 04:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.419 04:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.419 04:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.419 04:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.419 04:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.419 04:03:08 -- accel/accel.sh@42 -- # jq -r . 00:07:07.419 [2024-11-26 04:03:09.002424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.419 [2024-11-26 04:03:09.002681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:07:07.419 [2024-11-26 04:03:09.138158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.677 [2024-11-26 04:03:09.194799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val=0x1 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val=dif_verify 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val=software 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 
-- # val=32 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val=32 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val=1 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val=No 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:07.677 04:03:09 -- accel/accel.sh@21 -- # val= 00:07:07.677 04:03:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # IFS=: 00:07:07.677 04:03:09 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@21 -- # val= 00:07:09.053 04:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # IFS=: 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@21 -- # val= 00:07:09.053 04:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # IFS=: 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@21 -- # val= 00:07:09.053 04:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # IFS=: 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@21 -- # val= 00:07:09.053 04:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # IFS=: 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@21 -- # val= 00:07:09.053 04:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # IFS=: 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@21 -- # val= 00:07:09.053 04:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # IFS=: 00:07:09.053 04:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:09.053 04:03:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.053 04:03:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:09.053 04:03:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.053 00:07:09.053 real 0m2.852s 00:07:09.053 user 0m2.420s 00:07:09.053 sys 0m0.232s 00:07:09.053 ************************************ 00:07:09.053 END TEST accel_dif_verify 00:07:09.053 ************************************ 00:07:09.053 04:03:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.053 
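The dif_verify run above treats its 4096-byte transfer as eight 512-byte blocks, each carrying 8 bytes of T10 protection information, and re-checks that metadata. The sketch below verifies only the guard CRC (CRC-16 with the T10-DIF polynomial 0x8BB7) and ignores byte order and the application/reference tag checks that a full DIF implementation performs:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16 with the T10-DIF polynomial (0x8BB7), bitwise, init 0. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Simplified 8-byte protection field stored per 512-byte block. */
    struct dif { uint16_t guard; uint16_t app_tag; uint32_t ref_tag; };

    /* dif_verify: recompute each block's guard CRC and compare it with
     * the stored protection information. */
    static int dif_verify(const uint8_t *data, const struct dif *pi,
                          size_t nblocks)
    {
        for (size_t i = 0; i < nblocks; i++)
            if (crc16_t10dif(data + i * 512, 512) != pi[i].guard)
                return -1; /* guard tag mismatch */
        return 0;
    }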
04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 04:03:10 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:09.053 04:03:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:09.053 04:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.053 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 ************************************ 00:07:09.053 START TEST accel_dif_generate 00:07:09.053 ************************************ 00:07:09.053 04:03:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:09.053 04:03:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.053 04:03:10 -- accel/accel.sh@17 -- # local accel_module 00:07:09.053 04:03:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:09.053 04:03:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:09.053 04:03:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.053 04:03:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.053 04:03:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.053 04:03:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.053 04:03:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.053 04:03:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.053 04:03:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.053 04:03:10 -- accel/accel.sh@42 -- # jq -r . 00:07:09.053 [2024-11-26 04:03:10.465179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.054 [2024-11-26 04:03:10.465274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71078 ] 00:07:09.054 [2024-11-26 04:03:10.603275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.054 [2024-11-26 04:03:10.667461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.431 04:03:11 -- accel/accel.sh@18 -- # out=' 00:07:10.431 SPDK Configuration: 00:07:10.431 Core mask: 0x1 00:07:10.431 00:07:10.431 Accel Perf Configuration: 00:07:10.431 Workload Type: dif_generate 00:07:10.431 Vector size: 4096 bytes 00:07:10.431 Transfer size: 4096 bytes 00:07:10.431 Block size: 512 bytes 00:07:10.431 Metadata size: 8 bytes 00:07:10.431 Vector count 1 00:07:10.431 Module: software 00:07:10.431 Queue depth: 32 00:07:10.431 Allocate depth: 32 00:07:10.431 # threads/core: 1 00:07:10.431 Run time: 1 seconds 00:07:10.431 Verify: No 00:07:10.431 00:07:10.431 Running for 1 seconds... 
00:07:10.431 00:07:10.431 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.431 ------------------------------------------------------------------------------------ 00:07:10.431 0,0 152928/s 597 MiB/s 0 0 00:07:10.431 ==================================================================================== 00:07:10.431 Total 152928/s 597 MiB/s 0 0' 00:07:10.431 04:03:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:10.431 04:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:10.431 04:03:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.431 04:03:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.431 04:03:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.431 04:03:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.431 04:03:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.431 04:03:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.431 04:03:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.431 04:03:11 -- accel/accel.sh@42 -- # jq -r . 00:07:10.431 [2024-11-26 04:03:11.897232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.431 [2024-11-26 04:03:11.897500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71092 ] 00:07:10.431 [2024-11-26 04:03:12.034583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.431 [2024-11-26 04:03:12.091159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val=0x1 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val=dif_generate 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val
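The xtrace entries above show how each pass is launched: accel.sh runs /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate, with file descriptor 62 carrying the JSON accel configuration that build_accel_config assembles and pipes through jq -r . In this run that configuration is empty, since every [[ 0 -gt 0 ]] and [[ -n '' ]] guard in the trace falls through. A minimal standalone reproduction, as a sketch only (it assumes a built SPDK tree at the same path and drops the fd-based config because nothing is in it here):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate    # one-second software dif_generate pass, as traced above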
00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 04:03:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:10.431 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val=software 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val=32 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val=32 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val=1 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val=No 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:10.432 04:03:12 -- accel/accel.sh@21 -- # val= 00:07:10.432 04:03:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:10.432 04:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@21 -- # val= 00:07:11.809 04:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@21 -- # val= 00:07:11.809 04:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@21 -- # val= 00:07:11.809 04:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.809 04:03:13 -- 
accel/accel.sh@20 -- # IFS=: 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@21 -- # val= 00:07:11.809 04:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@21 -- # val= 00:07:11.809 04:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@21 -- # val= 00:07:11.809 04:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:11.809 04:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:11.809 04:03:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.809 04:03:13 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:11.809 04:03:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.809 00:07:11.809 real 0m2.920s 00:07:11.809 user 0m2.483s 00:07:11.809 sys 0m0.235s 00:07:11.809 04:03:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.809 04:03:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.809 ************************************ 00:07:11.809 END TEST accel_dif_generate 00:07:11.809 ************************************ 00:07:11.809 04:03:13 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:11.809 04:03:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:11.809 04:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.809 04:03:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.809 ************************************ 00:07:11.809 START TEST accel_dif_generate_copy 00:07:11.809 ************************************ 00:07:11.809 04:03:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:11.810 04:03:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.810 04:03:13 -- accel/accel.sh@17 -- # local accel_module 00:07:11.810 04:03:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:11.810 04:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:11.810 04:03:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.810 04:03:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.810 04:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.810 04:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.810 04:03:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.810 04:03:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.810 04:03:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.810 04:03:13 -- accel/accel.sh@42 -- # jq -r . 00:07:11.810 [2024-11-26 04:03:13.438594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
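The Transfers and Bandwidth columns of these tables are tied together by the 4096-byte transfer size reported in the configuration dump: MiB/s is transfers per second multiplied by 4096 and divided by 2^20, truncated to an integer. A quick shell check against the dif_generate totals above:

    echo $(( 152928 * 4096 / 1024 / 1024 ))    # prints 597, matching the 597 MiB/s Total row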
00:07:11.810 [2024-11-26 04:03:13.438673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71132 ] 00:07:11.810 [2024-11-26 04:03:13.566311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.071 [2024-11-26 04:03:13.640510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.454 04:03:14 -- accel/accel.sh@18 -- # out=' 00:07:13.454 SPDK Configuration: 00:07:13.454 Core mask: 0x1 00:07:13.454 00:07:13.454 Accel Perf Configuration: 00:07:13.454 Workload Type: dif_generate_copy 00:07:13.454 Vector size: 4096 bytes 00:07:13.454 Transfer size: 4096 bytes 00:07:13.454 Vector count 1 00:07:13.454 Module: software 00:07:13.454 Queue depth: 32 00:07:13.454 Allocate depth: 32 00:07:13.454 # threads/core: 1 00:07:13.454 Run time: 1 seconds 00:07:13.454 Verify: No 00:07:13.454 00:07:13.454 Running for 1 seconds... 00:07:13.454 00:07:13.454 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.454 ------------------------------------------------------------------------------------ 00:07:13.454 0,0 118592/s 463 MiB/s 0 0 00:07:13.454 ==================================================================================== 00:07:13.454 Total 118592/s 463 MiB/s 0 0' 00:07:13.454 04:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:13.454 04:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:13.454 04:03:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.454 04:03:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.454 04:03:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.454 04:03:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.454 04:03:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.454 04:03:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.454 04:03:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.454 04:03:14 -- accel/accel.sh@42 -- # jq -r . 00:07:13.454 [2024-11-26 04:03:14.916761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:13.454 [2024-11-26 04:03:14.917192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71146 ] 00:07:13.454 [2024-11-26 04:03:15.053767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.454 [2024-11-26 04:03:15.118423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val=0x1 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val=software 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val=32 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val=32 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 
-- # val=1 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val=No 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:13.454 04:03:15 -- accel/accel.sh@21 -- # val= 00:07:13.454 04:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:13.454 04:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:14.829 04:03:16 -- accel/accel.sh@21 -- # val= 00:07:14.829 04:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.829 04:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:14.829 04:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:14.829 04:03:16 -- accel/accel.sh@21 -- # val= 00:07:14.829 04:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.829 04:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:14.829 04:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:14.829 04:03:16 -- accel/accel.sh@21 -- # val= 00:07:14.829 04:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.829 04:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:14.829 04:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:14.829 04:03:16 -- accel/accel.sh@21 -- # val= 00:07:14.830 04:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.830 04:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:14.830 04:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:14.830 04:03:16 -- accel/accel.sh@21 -- # val= 00:07:14.830 04:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.830 04:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:14.830 04:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:14.830 04:03:16 -- accel/accel.sh@21 -- # val= 00:07:14.830 04:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.830 04:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:14.830 04:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:14.830 04:03:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.830 ************************************ 00:07:14.830 END TEST accel_dif_generate_copy 00:07:14.830 ************************************ 00:07:14.830 04:03:16 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:14.830 04:03:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.830 00:07:14.830 real 0m2.954s 00:07:14.830 user 0m2.481s 00:07:14.830 sys 0m0.269s 00:07:14.830 04:03:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.830 04:03:16 -- common/autotest_common.sh@10 -- # set +x 00:07:14.830 04:03:16 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:14.830 04:03:16 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.830 04:03:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:14.830 04:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.830 04:03:16 -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.830 ************************************ 00:07:14.830 START TEST accel_comp 00:07:14.830 ************************************ 00:07:14.830 04:03:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.830 04:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.830 04:03:16 -- accel/accel.sh@17 -- # local accel_module 00:07:14.830 04:03:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.830 04:03:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.830 04:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.830 04:03:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.830 04:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.830 04:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.830 04:03:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.830 04:03:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.830 04:03:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.830 04:03:16 -- accel/accel.sh@42 -- # jq -r . 00:07:14.830 [2024-11-26 04:03:16.443098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.830 [2024-11-26 04:03:16.443178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71186 ] 00:07:14.830 [2024-11-26 04:03:16.571464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.088 [2024-11-26 04:03:16.648082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.464 04:03:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.464 00:07:16.464 SPDK Configuration: 00:07:16.464 Core mask: 0x1 00:07:16.464 00:07:16.464 Accel Perf Configuration: 00:07:16.464 Workload Type: compress 00:07:16.464 Transfer size: 4096 bytes 00:07:16.464 Vector count 1 00:07:16.464 Module: software 00:07:16.464 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.464 Queue depth: 32 00:07:16.464 Allocate depth: 32 00:07:16.464 # threads/core: 1 00:07:16.464 Run time: 1 seconds 00:07:16.464 Verify: No 00:07:16.464 00:07:16.464 Running for 1 seconds... 
00:07:16.464 00:07:16.464 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.464 ------------------------------------------------------------------------------------ 00:07:16.464 0,0 59840/s 233 MiB/s 0 0 00:07:16.464 ==================================================================================== 00:07:16.464 Total 59840/s 233 MiB/s 0 0' 00:07:16.464 04:03:17 -- accel/accel.sh@20 -- # IFS=: 00:07:16.464 04:03:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.464 04:03:17 -- accel/accel.sh@20 -- # read -r var val 00:07:16.464 04:03:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.464 04:03:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.464 04:03:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.464 04:03:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.464 04:03:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.464 04:03:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.464 04:03:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.464 04:03:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.464 04:03:17 -- accel/accel.sh@42 -- # jq -r . 00:07:16.464 [2024-11-26 04:03:17.957427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.464 [2024-11-26 04:03:17.957684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71200 ] 00:07:16.464 [2024-11-26 04:03:18.094618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.464 [2024-11-26 04:03:18.161890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.723 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.723 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.723 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.723 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.723 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.723 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.723 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.723 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.723 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=0x1 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=compress 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=:
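The accel.sh@20-@24 markers that repeat through this trace (IFS=:, read -r var val, case "$var" in, accel_module=software, accel_opc=compress) come from a loop that re-runs accel_perf and splits each "Key: value" line of its output on the colon; the accel.sh@28 checks at the end of every test then assert that a module and an opcode were captured and that the software module handled the workload. A rough reconstruction inferred from those markers (a sketch, not the verbatim script):

    while IFS=: read -r var val; do
        case "$var" in
            *Module*) accel_module=${val# } ;;             # e.g. "Module: software"
            *'Workload Type'*) accel_opc=${val# } ;;       # e.g. "Workload Type: compress"
        esac
    done < <(accel_perf "$@")
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]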
00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=software 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=32 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=32 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=1 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val=No 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.724 04:03:18 -- accel/accel.sh@21 -- # val= 00:07:16.724 04:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.724 04:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@21 -- # val= 00:07:17.661 04:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@21 -- # val= 00:07:17.661 04:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@21 -- # val= 00:07:17.661 04:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@21 -- # val= 
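The banner blocks and the real/user/sys lines that bracket every test come from the run_test wrapper in autotest_common.sh; the '[' 8 -le 1 ']' style entries are its guard that a command was actually passed along with the test name. Judging from what the trace prints, the wrapper does roughly the following (a sketch inferred from this log, not the verbatim helper):

    run_test() {
        [ $# -le 1 ] && return 1           # needs a test name plus a command to run
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }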
00:07:17.661 04:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@21 -- # val= 00:07:17.661 04:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@21 -- # val= 00:07:17.661 04:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:17.661 04:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:17.661 04:03:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.661 04:03:19 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:17.661 04:03:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.661 00:07:17.661 real 0m2.998s 00:07:17.661 user 0m2.536s 00:07:17.661 sys 0m0.258s 00:07:17.661 04:03:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.661 ************************************ 00:07:17.661 END TEST accel_comp 00:07:17.661 ************************************ 00:07:17.661 04:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 04:03:19 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.920 04:03:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.920 04:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.920 04:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 ************************************ 00:07:17.920 START TEST accel_decomp 00:07:17.920 ************************************ 00:07:17.920 04:03:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.920 04:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.920 04:03:19 -- accel/accel.sh@17 -- # local accel_module 00:07:17.920 04:03:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.920 04:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.920 04:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.920 04:03:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.920 04:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.920 04:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.920 04:03:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.920 04:03:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.920 04:03:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.920 04:03:19 -- accel/accel.sh@42 -- # jq -r . 00:07:17.920 [2024-11-26 04:03:19.494849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.920 [2024-11-26 04:03:19.494950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:07:17.920 [2024-11-26 04:03:19.632451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.179 [2024-11-26 04:03:19.707273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.557 04:03:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:19.557 00:07:19.557 SPDK Configuration: 00:07:19.557 Core mask: 0x1 00:07:19.557 00:07:19.557 Accel Perf Configuration: 00:07:19.557 Workload Type: decompress 00:07:19.557 Transfer size: 4096 bytes 00:07:19.557 Vector count 1 00:07:19.557 Module: software 00:07:19.557 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.557 Queue depth: 32 00:07:19.557 Allocate depth: 32 00:07:19.557 # threads/core: 1 00:07:19.557 Run time: 1 seconds 00:07:19.557 Verify: Yes 00:07:19.557 00:07:19.557 Running for 1 seconds... 00:07:19.557 00:07:19.557 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.557 ------------------------------------------------------------------------------------ 00:07:19.557 0,0 85824/s 335 MiB/s 0 0 00:07:19.557 ==================================================================================== 00:07:19.557 Total 85824/s 335 MiB/s 0 0' 00:07:19.557 04:03:20 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.557 04:03:20 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.557 04:03:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.557 04:03:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.557 04:03:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.557 04:03:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.557 04:03:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.557 04:03:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.557 04:03:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.557 04:03:20 -- accel/accel.sh@42 -- # jq -r . 00:07:19.557 [2024-11-26 04:03:20.982737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:19.557 [2024-11-26 04:03:20.982834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71254 ] 00:07:19.557 [2024-11-26 04:03:21.111375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.557 [2024-11-26 04:03:21.175276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val=0x1 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val=decompress 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val=software 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- accel/accel.sh@21 -- # val=32 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.557 04:03:21 -- 
accel/accel.sh@21 -- # val=32 00:07:19.557 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.557 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.558 04:03:21 -- accel/accel.sh@21 -- # val=1 00:07:19.558 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.558 04:03:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.558 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.558 04:03:21 -- accel/accel.sh@21 -- # val=Yes 00:07:19.558 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.558 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.558 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.558 04:03:21 -- accel/accel.sh@21 -- # val= 00:07:19.558 04:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.558 04:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@21 -- # val= 00:07:20.936 04:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@21 -- # val= 00:07:20.936 04:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@21 -- # val= 00:07:20.936 04:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@21 -- # val= 00:07:20.936 04:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@21 -- # val= 00:07:20.936 04:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@21 -- # val= 00:07:20.936 ************************************ 00:07:20.936 END TEST accel_decomp 00:07:20.936 ************************************ 00:07:20.936 04:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:20.936 04:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:20.936 04:03:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.936 04:03:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.936 04:03:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.936 00:07:20.936 real 0m2.960s 00:07:20.936 user 0m2.483s 00:07:20.936 sys 0m0.276s 00:07:20.936 04:03:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.936 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:07:20.936 04:03:22 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
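Both decompress tests reuse the pre-compressed bib file via -l and pass -y, which is why their configuration dumps report Verify: Yes while the compress and dif passes report Verify: No. The accel_decmop_full variant adds -o 0; the dump that follows reports a transfer size of 111250 bytes, so with a zero transfer size the tool evidently sizes its buffers from the compressed input's chunks rather than the default 4096 bytes (an inference from this log, not a documented guarantee). From the repository root the two invocations differ only in that flag:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y          # 4096-byte transfers
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0     # full-chunk (111250-byte) transfers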
00:07:20.936 04:03:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:20.936 04:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.936 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:07:20.936 ************************************ 00:07:20.936 START TEST accel_decmop_full 00:07:20.936 ************************************ 00:07:20.936 04:03:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:20.936 04:03:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.936 04:03:22 -- accel/accel.sh@17 -- # local accel_module 00:07:20.936 04:03:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:20.936 04:03:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:20.936 04:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.936 04:03:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.936 04:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.936 04:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.936 04:03:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.936 04:03:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.936 04:03:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.936 04:03:22 -- accel/accel.sh@42 -- # jq -r . 00:07:20.936 [2024-11-26 04:03:22.503899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.936 [2024-11-26 04:03:22.504149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:07:20.936 [2024-11-26 04:03:22.639956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.195 [2024-11-26 04:03:22.714217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.570 04:03:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:22.570 00:07:22.570 SPDK Configuration: 00:07:22.570 Core mask: 0x1 00:07:22.570 00:07:22.570 Accel Perf Configuration: 00:07:22.570 Workload Type: decompress 00:07:22.570 Transfer size: 111250 bytes 00:07:22.570 Vector count 1 00:07:22.570 Module: software 00:07:22.570 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.570 Queue depth: 32 00:07:22.570 Allocate depth: 32 00:07:22.570 # threads/core: 1 00:07:22.570 Run time: 1 seconds 00:07:22.570 Verify: Yes 00:07:22.570 00:07:22.570 Running for 1 seconds... 
00:07:22.570 00:07:22.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.570 ------------------------------------------------------------------------------------ 00:07:22.570 0,0 5728/s 607 MiB/s 0 0 00:07:22.570 ==================================================================================== 00:07:22.570 Total 5728/s 607 MiB/s 0 0' 00:07:22.570 04:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:22.570 04:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:22.570 04:03:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.570 04:03:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.570 04:03:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.570 04:03:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.570 04:03:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.570 04:03:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.570 04:03:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.570 04:03:23 -- accel/accel.sh@42 -- # jq -r . 00:07:22.570 [2024-11-26 04:03:23.998440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.570 [2024-11-26 04:03:23.998674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71308 ] 00:07:22.570 [2024-11-26 04:03:24.135197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.570 [2024-11-26 04:03:24.201931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val=0x1 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val=decompress 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.570 04:03:24 -- accel/accel.sh@20
-- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.570 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.570 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.570 04:03:24 -- accel/accel.sh@21 -- # val=software 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val=32 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val=32 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val=1 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val=Yes 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:22.571 04:03:24 -- accel/accel.sh@21 -- # val= 00:07:22.571 04:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:22.571 04:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 04:03:25 -- accel/accel.sh@21 -- # val= 00:07:23.994 04:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 04:03:25 -- accel/accel.sh@21 -- # val= 00:07:23.994 04:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 04:03:25 -- accel/accel.sh@21 -- # val= 00:07:23.994 04:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 04:03:25 -- accel/accel.sh@21 -- # 
val= 00:07:23.994 04:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 04:03:25 -- accel/accel.sh@21 -- # val= 00:07:23.994 04:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 ************************************ 00:07:23.994 END TEST accel_decmop_full 00:07:23.994 ************************************ 00:07:23.994 04:03:25 -- accel/accel.sh@21 -- # val= 00:07:23.994 04:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:23.994 04:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:23.994 04:03:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.994 04:03:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.994 04:03:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.994 00:07:23.994 real 0m2.994s 00:07:23.994 user 0m2.525s 00:07:23.994 sys 0m0.264s 00:07:23.994 04:03:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.994 04:03:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.994 04:03:25 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.994 04:03:25 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:23.994 04:03:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.994 04:03:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.994 ************************************ 00:07:23.994 START TEST accel_decomp_mcore 00:07:23.994 ************************************ 00:07:23.994 04:03:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.994 04:03:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.994 04:03:25 -- accel/accel.sh@17 -- # local accel_module 00:07:23.994 04:03:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.994 04:03:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.994 04:03:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.994 04:03:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.994 04:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.994 04:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.994 04:03:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.994 04:03:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.994 04:03:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.994 04:03:25 -- accel/accel.sh@42 -- # jq -r . 00:07:23.994 [2024-11-26 04:03:25.553805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:23.994 [2024-11-26 04:03:25.554534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71343 ] 00:07:23.994 [2024-11-26 04:03:25.691503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.264 [2024-11-26 04:03:25.772446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.264 [2024-11-26 04:03:25.772593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.264 [2024-11-26 04:03:25.772728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.264 [2024-11-26 04:03:25.773254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.643 04:03:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:25.643 00:07:25.643 SPDK Configuration: 00:07:25.643 Core mask: 0xf 00:07:25.643 00:07:25.643 Accel Perf Configuration: 00:07:25.643 Workload Type: decompress 00:07:25.643 Transfer size: 4096 bytes 00:07:25.643 Vector count 1 00:07:25.643 Module: software 00:07:25.643 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.643 Queue depth: 32 00:07:25.643 Allocate depth: 32 00:07:25.643 # threads/core: 1 00:07:25.643 Run time: 1 seconds 00:07:25.643 Verify: Yes 00:07:25.643 00:07:25.643 Running for 1 seconds... 00:07:25.643 00:07:25.643 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.643 ------------------------------------------------------------------------------------ 00:07:25.643 0,0 59264/s 231 MiB/s 0 0 00:07:25.643 3,0 53824/s 210 MiB/s 0 0 00:07:25.643 2,0 52960/s 206 MiB/s 0 0 00:07:25.643 1,0 54080/s 211 MiB/s 0 0 00:07:25.643 ==================================================================================== 00:07:25.643 Total 220128/s 859 MiB/s 0 0' 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:25.643 04:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.643 04:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.643 04:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.643 04:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.643 04:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.643 04:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.643 04:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.643 04:03:27 -- accel/accel.sh@42 -- # jq -r . 00:07:25.643 [2024-11-26 04:03:27.063307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:25.643 [2024-11-26 04:03:27.063550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71365 ] 00:07:25.643 [2024-11-26 04:03:27.200132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.643 [2024-11-26 04:03:27.264603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.643 [2024-11-26 04:03:27.264762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.643 [2024-11-26 04:03:27.265737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.643 [2024-11-26 04:03:27.265786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val=0xf 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val=decompress 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.643 04:03:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.643 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.643 04:03:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.643 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val=software 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 
00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val=32 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val=32 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val=1 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val=Yes 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:25.644 04:03:27 -- accel/accel.sh@21 -- # val= 00:07:25.644 04:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:25.644 04:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- 
accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@21 -- # val= 00:07:27.022 04:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:27.022 04:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:27.022 04:03:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.022 04:03:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.022 04:03:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.022 00:07:27.022 real 0m3.003s 00:07:27.022 user 0m9.586s 00:07:27.022 sys 0m0.290s 00:07:27.022 04:03:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.022 04:03:28 -- common/autotest_common.sh@10 -- # set +x 00:07:27.022 ************************************ 00:07:27.022 END TEST accel_decomp_mcore 00:07:27.022 ************************************ 00:07:27.022 04:03:28 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.022 04:03:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.022 04:03:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.022 04:03:28 -- common/autotest_common.sh@10 -- # set +x 00:07:27.022 ************************************ 00:07:27.022 START TEST accel_decomp_full_mcore 00:07:27.022 ************************************ 00:07:27.022 04:03:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.022 04:03:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.022 04:03:28 -- accel/accel.sh@17 -- # local accel_module 00:07:27.022 04:03:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.022 04:03:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.022 04:03:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.022 04:03:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.022 04:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.022 04:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.022 04:03:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.022 04:03:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.022 04:03:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.022 04:03:28 -- accel/accel.sh@42 -- # jq -r . 00:07:27.022 [2024-11-26 04:03:28.598066] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.022 [2024-11-26 04:03:28.598294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71403 ] 00:07:27.022 [2024-11-26 04:03:28.727250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.281 [2024-11-26 04:03:28.801002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.281 [2024-11-26 04:03:28.801134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.281 [2024-11-26 04:03:28.802168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.281 [2024-11-26 04:03:28.802182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.658 04:03:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:28.658 00:07:28.658 SPDK Configuration: 00:07:28.658 Core mask: 0xf 00:07:28.658 00:07:28.658 Accel Perf Configuration: 00:07:28.658 Workload Type: decompress 00:07:28.658 Transfer size: 111250 bytes 00:07:28.658 Vector count 1 00:07:28.658 Module: software 00:07:28.658 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.658 Queue depth: 32 00:07:28.658 Allocate depth: 32 00:07:28.658 # threads/core: 1 00:07:28.658 Run time: 1 seconds 00:07:28.658 Verify: Yes 00:07:28.658 00:07:28.658 Running for 1 seconds... 00:07:28.658 00:07:28.658 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.658 ------------------------------------------------------------------------------------ 00:07:28.658 0,0 5536/s 228 MiB/s 0 0 00:07:28.658 3,0 5344/s 220 MiB/s 0 0 00:07:28.658 2,0 5536/s 228 MiB/s 0 0 00:07:28.658 1,0 5376/s 222 MiB/s 0 0 00:07:28.658 ==================================================================================== 00:07:28.658 Total 21792/s 2312 MiB/s 0 0' 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.658 04:03:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.658 04:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.658 04:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.658 04:03:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.658 04:03:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.658 04:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.658 04:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.658 04:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.658 04:03:30 -- accel/accel.sh@42 -- # jq -r . 00:07:28.658 [2024-11-26 04:03:30.105318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:28.658 [2024-11-26 04:03:30.105400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71425 ] 00:07:28.658 [2024-11-26 04:03:30.234979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.658 [2024-11-26 04:03:30.303023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.658 [2024-11-26 04:03:30.303170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.658 [2024-11-26 04:03:30.303286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.658 [2024-11-26 04:03:30.303599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.658 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.658 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.658 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.658 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.658 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.658 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.658 04:03:30 -- accel/accel.sh@21 -- # val=0xf 00:07:28.658 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.658 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.658 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.658 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.658 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.658 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=decompress 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=software 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 
00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=32 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=32 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=1 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val=Yes 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.659 04:03:30 -- accel/accel.sh@21 -- # val= 00:07:28.659 04:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.659 04:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- 
accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@21 -- # val= 00:07:30.037 04:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:30.037 04:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:30.037 04:03:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.037 04:03:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:30.037 04:03:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.037 00:07:30.037 real 0m3.006s 00:07:30.037 user 0m9.645s 00:07:30.037 sys 0m0.289s 00:07:30.037 04:03:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.037 ************************************ 00:07:30.037 END TEST accel_decomp_full_mcore 00:07:30.037 ************************************ 00:07:30.037 04:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.037 04:03:31 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.037 04:03:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:30.037 04:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.037 04:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:30.037 ************************************ 00:07:30.037 START TEST accel_decomp_mthread 00:07:30.037 ************************************ 00:07:30.037 04:03:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.037 04:03:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.037 04:03:31 -- accel/accel.sh@17 -- # local accel_module 00:07:30.037 04:03:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.037 04:03:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.037 04:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.037 04:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.037 04:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.037 04:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.037 04:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.037 04:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.037 04:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.037 04:03:31 -- accel/accel.sh@42 -- # jq -r . 00:07:30.037 [2024-11-26 04:03:31.663143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.038 [2024-11-26 04:03:31.663225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71463 ] 00:07:30.038 [2024-11-26 04:03:31.794676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.296 [2024-11-26 04:03:31.866867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.673 04:03:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:31.673 00:07:31.673 SPDK Configuration: 00:07:31.673 Core mask: 0x1 00:07:31.673 00:07:31.673 Accel Perf Configuration: 00:07:31.673 Workload Type: decompress 00:07:31.673 Transfer size: 4096 bytes 00:07:31.673 Vector count 1 00:07:31.673 Module: software 00:07:31.673 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.673 Queue depth: 32 00:07:31.673 Allocate depth: 32 00:07:31.673 # threads/core: 2 00:07:31.673 Run time: 1 seconds 00:07:31.673 Verify: Yes 00:07:31.673 00:07:31.673 Running for 1 seconds... 00:07:31.673 00:07:31.673 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.673 ------------------------------------------------------------------------------------ 00:07:31.673 0,1 43360/s 79 MiB/s 0 0 00:07:31.673 0,0 43200/s 79 MiB/s 0 0 00:07:31.673 ==================================================================================== 00:07:31.673 Total 86560/s 338 MiB/s 0 0' 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.673 04:03:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.673 04:03:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.673 04:03:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.673 04:03:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.673 04:03:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.673 04:03:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.673 04:03:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.673 04:03:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.673 04:03:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.673 04:03:33 -- accel/accel.sh@42 -- # jq -r . 00:07:31.673 [2024-11-26 04:03:33.149816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:31.673 [2024-11-26 04:03:33.149909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:07:31.673 [2024-11-26 04:03:33.286753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.673 [2024-11-26 04:03:33.352160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.673 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.673 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.673 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.673 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.673 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.673 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.673 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.673 04:03:33 -- accel/accel.sh@21 -- # val=0x1 00:07:31.673 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.674 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.674 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.674 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.674 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.674 04:03:33 -- accel/accel.sh@21 -- # val=decompress 00:07:31.674 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.674 04:03:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.674 04:03:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.674 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.674 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val=software 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val=32 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- 
accel/accel.sh@21 -- # val=32 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val=2 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val=Yes 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.932 04:03:33 -- accel/accel.sh@21 -- # val= 00:07:31.932 04:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.932 04:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@21 -- # val= 00:07:32.870 04:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.870 04:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.870 04:03:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.870 04:03:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:32.870 04:03:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.870 00:07:32.870 real 0m2.974s 00:07:32.870 user 0m2.511s 00:07:32.870 sys 0m0.262s 00:07:32.870 04:03:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.870 ************************************ 00:07:32.870 END TEST accel_decomp_mthread 00:07:32.870 
************************************ 00:07:32.870 04:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:33.130 04:03:34 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.130 04:03:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:33.130 04:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.130 04:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:33.130 ************************************ 00:07:33.130 START TEST accel_deomp_full_mthread 00:07:33.130 ************************************ 00:07:33.130 04:03:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.130 04:03:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.130 04:03:34 -- accel/accel.sh@17 -- # local accel_module 00:07:33.130 04:03:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.130 04:03:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.130 04:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.130 04:03:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.130 04:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.130 04:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.130 04:03:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.130 04:03:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.130 04:03:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.130 04:03:34 -- accel/accel.sh@42 -- # jq -r . 00:07:33.130 [2024-11-26 04:03:34.695946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.130 [2024-11-26 04:03:34.696039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71517 ] 00:07:33.130 [2024-11-26 04:03:34.828571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.388 [2024-11-26 04:03:34.894947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.766 04:03:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.766 00:07:34.766 SPDK Configuration: 00:07:34.766 Core mask: 0x1 00:07:34.766 00:07:34.766 Accel Perf Configuration: 00:07:34.766 Workload Type: decompress 00:07:34.766 Transfer size: 111250 bytes 00:07:34.766 Vector count 1 00:07:34.766 Module: software 00:07:34.766 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.766 Queue depth: 32 00:07:34.766 Allocate depth: 32 00:07:34.766 # threads/core: 2 00:07:34.766 Run time: 1 seconds 00:07:34.766 Verify: Yes 00:07:34.766 00:07:34.766 Running for 1 seconds... 
00:07:34.766 00:07:34.766 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.766 ------------------------------------------------------------------------------------ 00:07:34.766 0,1 2944/s 121 MiB/s 0 0 00:07:34.766 0,0 2880/s 118 MiB/s 0 0 00:07:34.766 ==================================================================================== 00:07:34.766 Total 5824/s 617 MiB/s 0 0' 00:07:34.766 04:03:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.766 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.766 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.766 04:03:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.766 04:03:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.766 04:03:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.766 04:03:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.766 04:03:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.766 04:03:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.766 04:03:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.766 04:03:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.766 04:03:36 -- accel/accel.sh@42 -- # jq -r . 00:07:34.766 [2024-11-26 04:03:36.199342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.766 [2024-11-26 04:03:36.199442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71536 ] 00:07:34.766 [2024-11-26 04:03:36.338313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.766 [2024-11-26 04:03:36.408984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=0x1 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=decompress 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=software 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=32 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=32 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=2 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val=Yes 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.767 04:03:36 -- accel/accel.sh@21 -- # val= 00:07:34.767 04:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:34.767 04:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # 
read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@21 -- # val= 00:07:36.143 04:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:36.143 04:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:36.143 04:03:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.143 ************************************ 00:07:36.143 END TEST accel_deomp_full_mthread 00:07:36.143 ************************************ 00:07:36.143 04:03:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.143 04:03:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.143 00:07:36.143 real 0m3.016s 00:07:36.143 user 0m2.540s 00:07:36.143 sys 0m0.274s 00:07:36.143 04:03:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.143 04:03:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.143 04:03:37 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:36.143 04:03:37 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.143 04:03:37 -- accel/accel.sh@129 -- # build_accel_config 00:07:36.143 04:03:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.143 04:03:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:36.143 04:03:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.143 04:03:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.143 04:03:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.143 04:03:37 -- common/autotest_common.sh@10 -- # set +x 00:07:36.143 04:03:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.143 04:03:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.143 04:03:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.143 04:03:37 -- accel/accel.sh@42 -- # jq -r . 00:07:36.143 ************************************ 00:07:36.143 START TEST accel_dif_functional_tests 00:07:36.143 ************************************ 00:07:36.143 04:03:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.143 [2024-11-26 04:03:37.792666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:36.143 [2024-11-26 04:03:37.792798] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71572 ] 00:07:36.403 [2024-11-26 04:03:37.931345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.403 [2024-11-26 04:03:38.006730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.403 [2024-11-26 04:03:38.006862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.403 [2024-11-26 04:03:38.006864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.403 00:07:36.403 00:07:36.403 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.403 http://cunit.sourceforge.net/ 00:07:36.403 00:07:36.403 00:07:36.403 Suite: accel_dif 00:07:36.403 Test: verify: DIF generated, GUARD check ...passed 00:07:36.403 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.403 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.403 Test: verify: DIF not generated, GUARD check ...passed 00:07:36.403 Test: verify: DIF not generated, APPTAG check ...[2024-11-26 04:03:38.124769] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.403 [2024-11-26 04:03:38.124833] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.403 passed 00:07:36.403 Test: verify: DIF not generated, REFTAG check ...[2024-11-26 04:03:38.124906] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.403 [2024-11-26 04:03:38.125072] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.403 passed 00:07:36.403 Test: verify: APPTAG correct, APPTAG check ...passed[2024-11-26 04:03:38.125111] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.403 [2024-11-26 04:03:38.125186] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.403 00:07:36.403 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:36.403 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-11-26 04:03:38.125341] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.403 passed 00:07:36.403 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:36.403 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.403 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:36.403 Test: generate copy: DIF generated, GUARD check ...[2024-11-26 04:03:38.125686] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:36.403 passed 00:07:36.403 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:36.403 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.403 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.403 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.403 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.403 Test: generate copy: iovecs-len validate ...passed 00:07:36.403 Test: generate copy: buffer alignment validate ...[2024-11-26 04:03:38.126354] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:36.403 passed 00:07:36.403 00:07:36.403 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.403 suites 1 1 n/a 0 0 00:07:36.403 tests 20 20 20 0 0 00:07:36.403 asserts 204 204 204 0 n/a 00:07:36.403 00:07:36.403 Elapsed time = 0.005 seconds 00:07:36.662 00:07:36.662 real 0m0.628s 00:07:36.662 user 0m0.928s 00:07:36.662 sys 0m0.186s 00:07:36.662 04:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.662 ************************************ 00:07:36.662 END TEST accel_dif_functional_tests 00:07:36.662 ************************************ 00:07:36.662 04:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:36.662 00:07:36.662 real 1m4.367s 00:07:36.662 user 1m8.108s 00:07:36.662 sys 0m7.214s 00:07:36.662 ************************************ 00:07:36.662 END TEST accel 00:07:36.662 ************************************ 00:07:36.662 04:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.662 04:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:36.920 04:03:38 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:36.920 04:03:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.920 04:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.920 04:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:36.920 ************************************ 00:07:36.920 START TEST accel_rpc 00:07:36.920 ************************************ 00:07:36.920 04:03:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:36.920 * Looking for test storage... 00:07:36.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:36.920 04:03:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.920 04:03:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.920 04:03:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.920 04:03:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.920 04:03:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.920 04:03:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.920 04:03:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.920 04:03:38 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.920 04:03:38 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.920 04:03:38 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.920 04:03:38 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.920 04:03:38 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.920 04:03:38 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.920 04:03:38 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.920 04:03:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.920 04:03:38 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.920 04:03:38 -- scripts/common.sh@344 -- # : 1 00:07:36.920 04:03:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.920 04:03:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.920 04:03:38 -- scripts/common.sh@364 -- # decimal 1 00:07:36.920 04:03:38 -- scripts/common.sh@352 -- # local d=1 00:07:36.920 04:03:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.920 04:03:38 -- scripts/common.sh@354 -- # echo 1 00:07:36.920 04:03:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.920 04:03:38 -- scripts/common.sh@365 -- # decimal 2 00:07:36.920 04:03:38 -- scripts/common.sh@352 -- # local d=2 00:07:36.920 04:03:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.920 04:03:38 -- scripts/common.sh@354 -- # echo 2 00:07:36.921 04:03:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.921 04:03:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.921 04:03:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.921 04:03:38 -- scripts/common.sh@367 -- # return 0 00:07:36.921 04:03:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.921 04:03:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.921 --rc genhtml_branch_coverage=1 00:07:36.921 --rc genhtml_function_coverage=1 00:07:36.921 --rc genhtml_legend=1 00:07:36.921 --rc geninfo_all_blocks=1 00:07:36.921 --rc geninfo_unexecuted_blocks=1 00:07:36.921 00:07:36.921 ' 00:07:36.921 04:03:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.921 --rc genhtml_branch_coverage=1 00:07:36.921 --rc genhtml_function_coverage=1 00:07:36.921 --rc genhtml_legend=1 00:07:36.921 --rc geninfo_all_blocks=1 00:07:36.921 --rc geninfo_unexecuted_blocks=1 00:07:36.921 00:07:36.921 ' 00:07:36.921 04:03:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.921 --rc genhtml_branch_coverage=1 00:07:36.921 --rc genhtml_function_coverage=1 00:07:36.921 --rc genhtml_legend=1 00:07:36.921 --rc geninfo_all_blocks=1 00:07:36.921 --rc geninfo_unexecuted_blocks=1 00:07:36.921 00:07:36.921 ' 00:07:36.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.921 04:03:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.921 --rc genhtml_branch_coverage=1 00:07:36.921 --rc genhtml_function_coverage=1 00:07:36.921 --rc genhtml_legend=1 00:07:36.921 --rc geninfo_all_blocks=1 00:07:36.921 --rc geninfo_unexecuted_blocks=1 00:07:36.921 00:07:36.921 ' 00:07:36.921 04:03:38 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.921 04:03:38 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71649 00:07:36.921 04:03:38 -- accel/accel_rpc.sh@15 -- # waitforlisten 71649 00:07:36.921 04:03:38 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:36.921 04:03:38 -- common/autotest_common.sh@829 -- # '[' -z 71649 ']' 00:07:36.921 04:03:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.921 04:03:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.921 04:03:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.921 04:03:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.921 04:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:37.179 [2024-11-26 04:03:38.729498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.179 [2024-11-26 04:03:38.729766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71649 ] 00:07:37.179 [2024-11-26 04:03:38.859596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.180 [2024-11-26 04:03:38.931046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.180 [2024-11-26 04:03:38.931454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.115 04:03:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.115 04:03:39 -- common/autotest_common.sh@862 -- # return 0 00:07:38.115 04:03:39 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:38.115 04:03:39 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:38.115 04:03:39 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:38.115 04:03:39 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:38.116 04:03:39 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:38.116 04:03:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.116 04:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.116 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.116 ************************************ 00:07:38.116 START TEST accel_assign_opcode 00:07:38.116 ************************************ 00:07:38.116 04:03:39 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:38.116 04:03:39 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:38.116 04:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.116 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.116 [2024-11-26 04:03:39.607954] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:38.116 04:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.116 04:03:39 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:38.116 04:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.116 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.116 [2024-11-26 04:03:39.615951] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:38.116 04:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.116 04:03:39 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:38.116 04:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.116 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.375 04:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.375 04:03:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:38.375 04:03:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.375 04:03:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:38.375 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.375 04:03:39 -- accel/accel_rpc.sh@42 -- # grep software 00:07:38.375 04:03:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.375 software 00:07:38.375 
************************************ 00:07:38.375 END TEST accel_assign_opcode 00:07:38.375 ************************************ 00:07:38.375 00:07:38.375 real 0m0.355s 00:07:38.375 user 0m0.059s 00:07:38.375 sys 0m0.006s 00:07:38.375 04:03:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.375 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.375 04:03:39 -- accel/accel_rpc.sh@55 -- # killprocess 71649 00:07:38.375 04:03:39 -- common/autotest_common.sh@936 -- # '[' -z 71649 ']' 00:07:38.375 04:03:39 -- common/autotest_common.sh@940 -- # kill -0 71649 00:07:38.375 04:03:40 -- common/autotest_common.sh@941 -- # uname 00:07:38.375 04:03:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:38.375 04:03:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71649 00:07:38.375 04:03:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:38.375 04:03:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:38.375 killing process with pid 71649 00:07:38.375 04:03:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71649' 00:07:38.375 04:03:40 -- common/autotest_common.sh@955 -- # kill 71649 00:07:38.375 04:03:40 -- common/autotest_common.sh@960 -- # wait 71649 00:07:38.944 ************************************ 00:07:38.944 END TEST accel_rpc 00:07:38.944 ************************************ 00:07:38.944 00:07:38.944 real 0m2.064s 00:07:38.944 user 0m1.985s 00:07:38.944 sys 0m0.527s 00:07:38.944 04:03:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.944 04:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:38.944 04:03:40 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:38.944 04:03:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.944 04:03:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.944 04:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:38.944 ************************************ 00:07:38.944 START TEST app_cmdline 00:07:38.944 ************************************ 00:07:38.944 04:03:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:38.944 * Looking for test storage... 
00:07:38.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:38.944 04:03:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.944 04:03:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.944 04:03:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.203 04:03:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.203 04:03:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.203 04:03:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.203 04:03:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.203 04:03:40 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.203 04:03:40 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.203 04:03:40 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.203 04:03:40 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.203 04:03:40 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.203 04:03:40 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.203 04:03:40 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.203 04:03:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.203 04:03:40 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.203 04:03:40 -- scripts/common.sh@344 -- # : 1 00:07:39.203 04:03:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.203 04:03:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.203 04:03:40 -- scripts/common.sh@364 -- # decimal 1 00:07:39.203 04:03:40 -- scripts/common.sh@352 -- # local d=1 00:07:39.203 04:03:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.203 04:03:40 -- scripts/common.sh@354 -- # echo 1 00:07:39.203 04:03:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.203 04:03:40 -- scripts/common.sh@365 -- # decimal 2 00:07:39.203 04:03:40 -- scripts/common.sh@352 -- # local d=2 00:07:39.203 04:03:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.203 04:03:40 -- scripts/common.sh@354 -- # echo 2 00:07:39.203 04:03:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.203 04:03:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.203 04:03:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.203 04:03:40 -- scripts/common.sh@367 -- # return 0 00:07:39.204 04:03:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.204 04:03:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.204 --rc genhtml_branch_coverage=1 00:07:39.204 --rc genhtml_function_coverage=1 00:07:39.204 --rc genhtml_legend=1 00:07:39.204 --rc geninfo_all_blocks=1 00:07:39.204 --rc geninfo_unexecuted_blocks=1 00:07:39.204 00:07:39.204 ' 00:07:39.204 04:03:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.204 --rc genhtml_branch_coverage=1 00:07:39.204 --rc genhtml_function_coverage=1 00:07:39.204 --rc genhtml_legend=1 00:07:39.204 --rc geninfo_all_blocks=1 00:07:39.204 --rc geninfo_unexecuted_blocks=1 00:07:39.204 00:07:39.204 ' 00:07:39.204 04:03:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.204 --rc genhtml_branch_coverage=1 00:07:39.204 --rc genhtml_function_coverage=1 00:07:39.204 --rc genhtml_legend=1 00:07:39.204 --rc geninfo_all_blocks=1 00:07:39.204 --rc geninfo_unexecuted_blocks=1 00:07:39.204 00:07:39.204 ' 00:07:39.204 04:03:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.204 --rc genhtml_branch_coverage=1 00:07:39.204 --rc genhtml_function_coverage=1 00:07:39.204 --rc genhtml_legend=1 00:07:39.204 --rc geninfo_all_blocks=1 00:07:39.204 --rc geninfo_unexecuted_blocks=1 00:07:39.204 00:07:39.204 ' 00:07:39.204 04:03:40 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:39.204 04:03:40 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71767 00:07:39.204 04:03:40 -- app/cmdline.sh@18 -- # waitforlisten 71767 00:07:39.204 04:03:40 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:39.204 04:03:40 -- common/autotest_common.sh@829 -- # '[' -z 71767 ']' 00:07:39.204 04:03:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.204 04:03:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.204 04:03:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.204 04:03:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.204 04:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.204 [2024-11-26 04:03:40.830179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.204 [2024-11-26 04:03:40.830771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:07:39.464 [2024-11-26 04:03:40.971448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.464 [2024-11-26 04:03:41.044037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.464 [2024-11-26 04:03:41.044436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.401 04:03:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.401 04:03:41 -- common/autotest_common.sh@862 -- # return 0 00:07:40.401 04:03:41 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:40.401 { 00:07:40.401 "fields": { 00:07:40.401 "commit": "c13c99a5e", 00:07:40.401 "major": 24, 00:07:40.401 "minor": 1, 00:07:40.401 "patch": 1, 00:07:40.401 "suffix": "-pre" 00:07:40.401 }, 00:07:40.401 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:40.401 } 00:07:40.401 04:03:42 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:40.401 04:03:42 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:40.401 04:03:42 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:40.401 04:03:42 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:40.401 04:03:42 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:40.401 04:03:42 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:40.401 04:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.401 04:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:40.401 04:03:42 -- app/cmdline.sh@26 -- # sort 00:07:40.401 04:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.660 04:03:42 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:40.660 04:03:42 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:40.660 04:03:42 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.660 04:03:42 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.660 04:03:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.660 04:03:42 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.660 04:03:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.660 04:03:42 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.660 04:03:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.660 04:03:42 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.660 04:03:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.660 04:03:42 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:40.660 04:03:42 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:40.660 04:03:42 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.660 2024/11/26 04:03:42 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:40.660 request: 00:07:40.660 { 00:07:40.660 "method": "env_dpdk_get_mem_stats", 00:07:40.660 "params": {} 00:07:40.660 } 00:07:40.660 Got JSON-RPC error response 00:07:40.660 GoRPCClient: error on JSON-RPC call 00:07:40.660 04:03:42 -- common/autotest_common.sh@653 -- # es=1 00:07:40.660 04:03:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.660 04:03:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.660 04:03:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.660 04:03:42 -- app/cmdline.sh@1 -- # killprocess 71767 00:07:40.660 04:03:42 -- common/autotest_common.sh@936 -- # '[' -z 71767 ']' 00:07:40.660 04:03:42 -- common/autotest_common.sh@940 -- # kill -0 71767 00:07:40.660 04:03:42 -- common/autotest_common.sh@941 -- # uname 00:07:40.660 04:03:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.660 04:03:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71767 00:07:40.660 04:03:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.660 04:03:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.660 killing process with pid 71767 00:07:40.660 04:03:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71767' 00:07:40.660 04:03:42 -- common/autotest_common.sh@955 -- # kill 71767 00:07:40.660 04:03:42 -- common/autotest_common.sh@960 -- # wait 71767 00:07:41.227 00:07:41.227 real 0m2.323s 00:07:41.227 user 0m2.750s 00:07:41.227 sys 0m0.590s 00:07:41.227 04:03:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.227 04:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:41.227 ************************************ 00:07:41.227 END TEST app_cmdline 00:07:41.227 ************************************ 00:07:41.227 04:03:42 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:41.227 04:03:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.227 04:03:42 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.227 04:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:41.227 ************************************ 00:07:41.227 START TEST version 00:07:41.227 ************************************ 00:07:41.227 04:03:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:41.485 * Looking for test storage... 00:07:41.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.485 04:03:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:41.485 04:03:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:41.485 04:03:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:41.485 04:03:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:41.485 04:03:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:41.485 04:03:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:41.485 04:03:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:41.485 04:03:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:41.485 04:03:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:41.485 04:03:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.485 04:03:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:41.485 04:03:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:41.485 04:03:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:41.485 04:03:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:41.485 04:03:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:41.485 04:03:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:41.485 04:03:43 -- scripts/common.sh@344 -- # : 1 00:07:41.485 04:03:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:41.485 04:03:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.485 04:03:43 -- scripts/common.sh@364 -- # decimal 1 00:07:41.485 04:03:43 -- scripts/common.sh@352 -- # local d=1 00:07:41.485 04:03:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.485 04:03:43 -- scripts/common.sh@354 -- # echo 1 00:07:41.485 04:03:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:41.486 04:03:43 -- scripts/common.sh@365 -- # decimal 2 00:07:41.486 04:03:43 -- scripts/common.sh@352 -- # local d=2 00:07:41.486 04:03:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.486 04:03:43 -- scripts/common.sh@354 -- # echo 2 00:07:41.486 04:03:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:41.486 04:03:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:41.486 04:03:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:41.486 04:03:43 -- scripts/common.sh@367 -- # return 0 00:07:41.486 04:03:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.486 04:03:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:41.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.486 --rc genhtml_branch_coverage=1 00:07:41.486 --rc genhtml_function_coverage=1 00:07:41.486 --rc genhtml_legend=1 00:07:41.486 --rc geninfo_all_blocks=1 00:07:41.486 --rc geninfo_unexecuted_blocks=1 00:07:41.486 00:07:41.486 ' 00:07:41.486 04:03:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:41.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.486 --rc genhtml_branch_coverage=1 00:07:41.486 --rc genhtml_function_coverage=1 00:07:41.486 --rc genhtml_legend=1 00:07:41.486 --rc geninfo_all_blocks=1 00:07:41.486 --rc geninfo_unexecuted_blocks=1 00:07:41.486 00:07:41.486 ' 00:07:41.486 
04:03:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:41.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.486 --rc genhtml_branch_coverage=1 00:07:41.486 --rc genhtml_function_coverage=1 00:07:41.486 --rc genhtml_legend=1 00:07:41.486 --rc geninfo_all_blocks=1 00:07:41.486 --rc geninfo_unexecuted_blocks=1 00:07:41.486 00:07:41.486 ' 00:07:41.486 04:03:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:41.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.486 --rc genhtml_branch_coverage=1 00:07:41.486 --rc genhtml_function_coverage=1 00:07:41.486 --rc genhtml_legend=1 00:07:41.486 --rc geninfo_all_blocks=1 00:07:41.486 --rc geninfo_unexecuted_blocks=1 00:07:41.486 00:07:41.486 ' 00:07:41.486 04:03:43 -- app/version.sh@17 -- # get_header_version major 00:07:41.486 04:03:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:41.486 04:03:43 -- app/version.sh@14 -- # cut -f2 00:07:41.486 04:03:43 -- app/version.sh@14 -- # tr -d '"' 00:07:41.486 04:03:43 -- app/version.sh@17 -- # major=24 00:07:41.486 04:03:43 -- app/version.sh@18 -- # get_header_version minor 00:07:41.486 04:03:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:41.486 04:03:43 -- app/version.sh@14 -- # tr -d '"' 00:07:41.486 04:03:43 -- app/version.sh@14 -- # cut -f2 00:07:41.486 04:03:43 -- app/version.sh@18 -- # minor=1 00:07:41.486 04:03:43 -- app/version.sh@19 -- # get_header_version patch 00:07:41.486 04:03:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:41.486 04:03:43 -- app/version.sh@14 -- # cut -f2 00:07:41.486 04:03:43 -- app/version.sh@14 -- # tr -d '"' 00:07:41.486 04:03:43 -- app/version.sh@19 -- # patch=1 00:07:41.486 04:03:43 -- app/version.sh@20 -- # get_header_version suffix 00:07:41.486 04:03:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:41.486 04:03:43 -- app/version.sh@14 -- # cut -f2 00:07:41.486 04:03:43 -- app/version.sh@14 -- # tr -d '"' 00:07:41.486 04:03:43 -- app/version.sh@20 -- # suffix=-pre 00:07:41.486 04:03:43 -- app/version.sh@22 -- # version=24.1 00:07:41.486 04:03:43 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:41.486 04:03:43 -- app/version.sh@25 -- # version=24.1.1 00:07:41.486 04:03:43 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:41.486 04:03:43 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:41.486 04:03:43 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:41.486 04:03:43 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:41.486 04:03:43 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:41.486 00:07:41.486 real 0m0.236s 00:07:41.486 user 0m0.153s 00:07:41.486 sys 0m0.120s 00:07:41.486 04:03:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.486 04:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.486 ************************************ 00:07:41.486 END TEST version 00:07:41.486 ************************************ 00:07:41.486 04:03:43 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:41.486 
04:03:43 -- spdk/autotest.sh@191 -- # uname -s 00:07:41.486 04:03:43 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:41.486 04:03:43 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:41.486 04:03:43 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:41.486 04:03:43 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:41.486 04:03:43 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:41.486 04:03:43 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:41.486 04:03:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.486 04:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 04:03:43 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:41.745 04:03:43 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:41.745 04:03:43 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:41.745 04:03:43 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:41.745 04:03:43 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:41.745 04:03:43 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:41.745 04:03:43 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.745 04:03:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:41.745 04:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.745 04:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 ************************************ 00:07:41.745 START TEST nvmf_tcp 00:07:41.745 ************************************ 00:07:41.745 04:03:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.745 * Looking for test storage... 00:07:41.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:41.745 04:03:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:41.745 04:03:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:41.745 04:03:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:41.745 04:03:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:41.745 04:03:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:41.745 04:03:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:41.745 04:03:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:41.745 04:03:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:41.745 04:03:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:41.745 04:03:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.745 04:03:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:41.745 04:03:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:41.745 04:03:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:41.745 04:03:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:41.745 04:03:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:41.745 04:03:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:41.745 04:03:43 -- scripts/common.sh@344 -- # : 1 00:07:41.745 04:03:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:41.745 04:03:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.745 04:03:43 -- scripts/common.sh@364 -- # decimal 1 00:07:41.745 04:03:43 -- scripts/common.sh@352 -- # local d=1 00:07:41.745 04:03:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.745 04:03:43 -- scripts/common.sh@354 -- # echo 1 00:07:41.745 04:03:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:41.745 04:03:43 -- scripts/common.sh@365 -- # decimal 2 00:07:41.745 04:03:43 -- scripts/common.sh@352 -- # local d=2 00:07:41.745 04:03:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.745 04:03:43 -- scripts/common.sh@354 -- # echo 2 00:07:41.745 04:03:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:41.745 04:03:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:41.745 04:03:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:41.745 04:03:43 -- scripts/common.sh@367 -- # return 0 00:07:41.745 04:03:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.745 04:03:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.745 --rc genhtml_branch_coverage=1 00:07:41.745 --rc genhtml_function_coverage=1 00:07:41.745 --rc genhtml_legend=1 00:07:41.745 --rc geninfo_all_blocks=1 00:07:41.745 --rc geninfo_unexecuted_blocks=1 00:07:41.745 00:07:41.745 ' 00:07:41.745 04:03:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.745 --rc genhtml_branch_coverage=1 00:07:41.745 --rc genhtml_function_coverage=1 00:07:41.745 --rc genhtml_legend=1 00:07:41.745 --rc geninfo_all_blocks=1 00:07:41.745 --rc geninfo_unexecuted_blocks=1 00:07:41.745 00:07:41.745 ' 00:07:41.745 04:03:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.745 --rc genhtml_branch_coverage=1 00:07:41.745 --rc genhtml_function_coverage=1 00:07:41.745 --rc genhtml_legend=1 00:07:41.745 --rc geninfo_all_blocks=1 00:07:41.745 --rc geninfo_unexecuted_blocks=1 00:07:41.745 00:07:41.745 ' 00:07:41.745 04:03:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.745 --rc genhtml_branch_coverage=1 00:07:41.745 --rc genhtml_function_coverage=1 00:07:41.745 --rc genhtml_legend=1 00:07:41.745 --rc geninfo_all_blocks=1 00:07:41.745 --rc geninfo_unexecuted_blocks=1 00:07:41.745 00:07:41.745 ' 00:07:41.745 04:03:43 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:41.745 04:03:43 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:41.745 04:03:43 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:41.745 04:03:43 -- nvmf/common.sh@7 -- # uname -s 00:07:41.745 04:03:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.745 04:03:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.745 04:03:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.745 04:03:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.745 04:03:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.745 04:03:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.745 04:03:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.745 04:03:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.745 04:03:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.745 04:03:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.745 04:03:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:07:41.745 04:03:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:07:41.745 04:03:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.745 04:03:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.745 04:03:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:41.745 04:03:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.005 04:03:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.005 04:03:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.005 04:03:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.005 04:03:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.005 04:03:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.005 04:03:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.005 04:03:43 -- paths/export.sh@5 -- # export PATH 00:07:42.005 04:03:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.005 04:03:43 -- nvmf/common.sh@46 -- # : 0 00:07:42.005 04:03:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:42.005 04:03:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:42.005 04:03:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:42.005 04:03:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.005 04:03:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.005 04:03:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:42.005 04:03:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:42.005 04:03:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:42.005 04:03:43 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:42.005 04:03:43 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:42.005 04:03:43 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:42.005 04:03:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.005 04:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.005 04:03:43 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:42.005 04:03:43 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:42.005 04:03:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:42.005 04:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.005 04:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.005 ************************************ 00:07:42.005 START TEST nvmf_example 00:07:42.005 ************************************ 00:07:42.005 04:03:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:42.005 * Looking for test storage... 00:07:42.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.005 04:03:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:42.005 04:03:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:42.005 04:03:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:42.005 04:03:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:42.005 04:03:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:42.005 04:03:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:42.005 04:03:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:42.005 04:03:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:42.005 04:03:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:42.005 04:03:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.005 04:03:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:42.005 04:03:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:42.005 04:03:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:42.005 04:03:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:42.005 04:03:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:42.005 04:03:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:42.005 04:03:43 -- scripts/common.sh@344 -- # : 1 00:07:42.005 04:03:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:42.005 04:03:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.005 04:03:43 -- scripts/common.sh@364 -- # decimal 1 00:07:42.005 04:03:43 -- scripts/common.sh@352 -- # local d=1 00:07:42.005 04:03:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.005 04:03:43 -- scripts/common.sh@354 -- # echo 1 00:07:42.005 04:03:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:42.005 04:03:43 -- scripts/common.sh@365 -- # decimal 2 00:07:42.005 04:03:43 -- scripts/common.sh@352 -- # local d=2 00:07:42.005 04:03:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.005 04:03:43 -- scripts/common.sh@354 -- # echo 2 00:07:42.005 04:03:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:42.005 04:03:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:42.005 04:03:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:42.005 04:03:43 -- scripts/common.sh@367 -- # return 0 00:07:42.005 04:03:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.005 04:03:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.005 --rc genhtml_branch_coverage=1 00:07:42.005 --rc genhtml_function_coverage=1 00:07:42.005 --rc genhtml_legend=1 00:07:42.005 --rc geninfo_all_blocks=1 00:07:42.005 --rc geninfo_unexecuted_blocks=1 00:07:42.005 00:07:42.005 ' 00:07:42.005 04:03:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.005 --rc genhtml_branch_coverage=1 00:07:42.005 --rc genhtml_function_coverage=1 00:07:42.005 --rc genhtml_legend=1 00:07:42.005 --rc geninfo_all_blocks=1 00:07:42.005 --rc geninfo_unexecuted_blocks=1 00:07:42.005 00:07:42.005 ' 00:07:42.005 04:03:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.005 --rc genhtml_branch_coverage=1 00:07:42.005 --rc genhtml_function_coverage=1 00:07:42.005 --rc genhtml_legend=1 00:07:42.005 --rc geninfo_all_blocks=1 00:07:42.005 --rc geninfo_unexecuted_blocks=1 00:07:42.005 00:07:42.005 ' 00:07:42.005 04:03:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:42.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.005 --rc genhtml_branch_coverage=1 00:07:42.005 --rc genhtml_function_coverage=1 00:07:42.005 --rc genhtml_legend=1 00:07:42.005 --rc geninfo_all_blocks=1 00:07:42.005 --rc geninfo_unexecuted_blocks=1 00:07:42.005 00:07:42.005 ' 00:07:42.006 04:03:43 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.006 04:03:43 -- nvmf/common.sh@7 -- # uname -s 00:07:42.006 04:03:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.006 04:03:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.006 04:03:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.006 04:03:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.006 04:03:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.006 04:03:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.006 04:03:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.006 04:03:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.006 04:03:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.006 04:03:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.006 04:03:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
00:07:42.006 04:03:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:07:42.006 04:03:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.006 04:03:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.006 04:03:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.006 04:03:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.006 04:03:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.006 04:03:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.006 04:03:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.006 04:03:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.006 04:03:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.006 04:03:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.006 04:03:43 -- paths/export.sh@5 -- # export PATH 00:07:42.006 04:03:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.006 04:03:43 -- nvmf/common.sh@46 -- # : 0 00:07:42.006 04:03:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:42.006 04:03:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:42.006 04:03:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:42.006 04:03:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.006 04:03:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.006 04:03:43 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:42.006 04:03:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:42.006 04:03:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:42.006 04:03:43 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:42.006 04:03:43 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:42.006 04:03:43 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:42.006 04:03:43 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:42.006 04:03:43 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:42.006 04:03:43 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:42.006 04:03:43 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:42.006 04:03:43 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:42.006 04:03:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.006 04:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.006 04:03:43 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:42.006 04:03:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:42.006 04:03:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.006 04:03:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:42.006 04:03:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:42.006 04:03:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:42.006 04:03:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.006 04:03:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.006 04:03:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.006 04:03:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:42.006 04:03:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:42.006 04:03:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:42.006 04:03:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:42.006 04:03:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:42.006 04:03:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:42.006 04:03:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.006 04:03:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.006 04:03:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:42.006 04:03:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:42.006 04:03:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:42.006 04:03:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:42.006 04:03:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:42.006 04:03:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.006 04:03:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:42.006 04:03:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:42.006 04:03:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:42.006 04:03:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:42.006 04:03:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:42.006 Cannot find device "nvmf_init_br" 00:07:42.006 04:03:43 -- nvmf/common.sh@153 -- # true 00:07:42.006 04:03:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:42.265 Cannot find device "nvmf_tgt_br" 00:07:42.265 04:03:43 -- nvmf/common.sh@154 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.265 Cannot find device "nvmf_tgt_br2" 
00:07:42.265 04:03:43 -- nvmf/common.sh@155 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:42.265 Cannot find device "nvmf_init_br" 00:07:42.265 04:03:43 -- nvmf/common.sh@156 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:42.265 Cannot find device "nvmf_tgt_br" 00:07:42.265 04:03:43 -- nvmf/common.sh@157 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:42.265 Cannot find device "nvmf_tgt_br2" 00:07:42.265 04:03:43 -- nvmf/common.sh@158 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:42.265 Cannot find device "nvmf_br" 00:07:42.265 04:03:43 -- nvmf/common.sh@159 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:42.265 Cannot find device "nvmf_init_if" 00:07:42.265 04:03:43 -- nvmf/common.sh@160 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.265 04:03:43 -- nvmf/common.sh@161 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.265 04:03:43 -- nvmf/common.sh@162 -- # true 00:07:42.265 04:03:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:42.265 04:03:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:42.265 04:03:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.265 04:03:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.265 04:03:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.265 04:03:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.265 04:03:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.265 04:03:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:42.265 04:03:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:42.265 04:03:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:42.265 04:03:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:42.265 04:03:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:42.265 04:03:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:42.265 04:03:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:42.265 04:03:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.265 04:03:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.265 04:03:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:42.525 04:03:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:42.525 04:03:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:42.525 04:03:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:42.525 04:03:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:42.525 04:03:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:42.525 04:03:44 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:42.525 04:03:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:42.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:07:42.525 00:07:42.525 --- 10.0.0.2 ping statistics --- 00:07:42.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.525 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:42.525 04:03:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:42.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:42.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:42.525 00:07:42.525 --- 10.0.0.3 ping statistics --- 00:07:42.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.525 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:42.525 04:03:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:42.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:42.525 00:07:42.525 --- 10.0.0.1 ping statistics --- 00:07:42.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.525 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:42.525 04:03:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.525 04:03:44 -- nvmf/common.sh@421 -- # return 0 00:07:42.525 04:03:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:42.525 04:03:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.525 04:03:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:42.525 04:03:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:42.525 04:03:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.525 04:03:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:42.525 04:03:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:42.525 04:03:44 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:42.525 04:03:44 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:42.525 04:03:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.525 04:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:42.525 04:03:44 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:42.525 04:03:44 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:42.525 04:03:44 -- target/nvmf_example.sh@34 -- # nvmfpid=72147 00:07:42.525 04:03:44 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.525 04:03:44 -- target/nvmf_example.sh@36 -- # waitforlisten 72147 00:07:42.525 04:03:44 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:42.525 04:03:44 -- common/autotest_common.sh@829 -- # '[' -z 72147 ']' 00:07:42.525 04:03:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.525 04:03:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.525 04:03:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
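Before the example target is started, the harness has already assembled the veth/bridge topology traced above. Condensed, and assuming the same interface names and 10.0.0.0/24 addressing used in this run (a second target veth, nvmf_tgt_if2 at 10.0.0.3, is created the same way), it amounts to:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, gets 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side, moved into the netns with 10.0.0.2/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the listener port
The pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 in the trace simply verify this wiring before the nvmf example binary is launched inside the namespace.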
00:07:42.525 04:03:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.525 04:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.903 04:03:45 -- common/autotest_common.sh@862 -- # return 0 00:07:43.903 04:03:45 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:43.903 04:03:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.903 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.903 04:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.903 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.903 04:03:45 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:43.903 04:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.903 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.903 04:03:45 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:43.903 04:03:45 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.903 04:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.903 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.903 04:03:45 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:43.903 04:03:45 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:43.903 04:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.903 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.903 04:03:45 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.903 04:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.903 04:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 04:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.903 04:03:45 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:43.903 04:03:45 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:53.882 Initializing NVMe Controllers 00:07:53.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:53.882 Initialization complete. Launching workers. 
00:07:53.882 ======================================================== 00:07:53.882 Latency(us) 00:07:53.882 Device Information : IOPS MiB/s Average min max 00:07:53.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17052.61 66.61 3752.76 618.41 20155.50 00:07:53.882 ======================================================== 00:07:53.882 Total : 17052.61 66.61 3752.76 618.41 20155.50 00:07:53.882 00:07:53.882 04:03:55 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:53.882 04:03:55 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:53.882 04:03:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:53.882 04:03:55 -- nvmf/common.sh@116 -- # sync 00:07:54.141 04:03:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:54.141 04:03:55 -- nvmf/common.sh@119 -- # set +e 00:07:54.141 04:03:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:54.141 04:03:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:54.141 rmmod nvme_tcp 00:07:54.141 rmmod nvme_fabrics 00:07:54.141 rmmod nvme_keyring 00:07:54.141 04:03:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:54.141 04:03:55 -- nvmf/common.sh@123 -- # set -e 00:07:54.141 04:03:55 -- nvmf/common.sh@124 -- # return 0 00:07:54.141 04:03:55 -- nvmf/common.sh@477 -- # '[' -n 72147 ']' 00:07:54.141 04:03:55 -- nvmf/common.sh@478 -- # killprocess 72147 00:07:54.141 04:03:55 -- common/autotest_common.sh@936 -- # '[' -z 72147 ']' 00:07:54.141 04:03:55 -- common/autotest_common.sh@940 -- # kill -0 72147 00:07:54.141 04:03:55 -- common/autotest_common.sh@941 -- # uname 00:07:54.141 04:03:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:54.141 04:03:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72147 00:07:54.141 04:03:55 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:54.141 04:03:55 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:54.141 killing process with pid 72147 00:07:54.141 04:03:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72147' 00:07:54.141 04:03:55 -- common/autotest_common.sh@955 -- # kill 72147 00:07:54.141 04:03:55 -- common/autotest_common.sh@960 -- # wait 72147 00:07:54.400 nvmf threads initialize successfully 00:07:54.400 bdev subsystem init successfully 00:07:54.400 created a nvmf target service 00:07:54.400 create targets's poll groups done 00:07:54.400 all subsystems of target started 00:07:54.400 nvmf target is running 00:07:54.400 all subsystems of target stopped 00:07:54.400 destroy targets's poll groups done 00:07:54.400 destroyed the nvmf target service 00:07:54.400 bdev subsystem finish successfully 00:07:54.400 nvmf threads destroy successfully 00:07:54.400 04:03:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:54.400 04:03:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:54.400 04:03:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:54.400 04:03:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.400 04:03:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:54.400 04:03:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.400 04:03:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.401 04:03:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.401 04:03:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:54.401 04:03:56 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:54.401 04:03:56 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:07:54.401 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:54.401 00:07:54.401 real 0m12.526s 00:07:54.401 user 0m44.739s 00:07:54.401 sys 0m2.010s 00:07:54.401 04:03:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.401 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:54.401 ************************************ 00:07:54.401 END TEST nvmf_example 00:07:54.401 ************************************ 00:07:54.401 04:03:56 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:54.401 04:03:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.401 04:03:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.401 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:54.401 ************************************ 00:07:54.401 START TEST nvmf_filesystem 00:07:54.401 ************************************ 00:07:54.401 04:03:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:54.663 * Looking for test storage... 00:07:54.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.663 04:03:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.663 04:03:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.663 04:03:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.663 04:03:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.663 04:03:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.663 04:03:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.663 04:03:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.663 04:03:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.663 04:03:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.663 04:03:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.663 04:03:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.663 04:03:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.663 04:03:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.663 04:03:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.663 04:03:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.663 04:03:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.663 04:03:56 -- scripts/common.sh@344 -- # : 1 00:07:54.663 04:03:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.663 04:03:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.663 04:03:56 -- scripts/common.sh@364 -- # decimal 1 00:07:54.663 04:03:56 -- scripts/common.sh@352 -- # local d=1 00:07:54.663 04:03:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.663 04:03:56 -- scripts/common.sh@354 -- # echo 1 00:07:54.663 04:03:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.663 04:03:56 -- scripts/common.sh@365 -- # decimal 2 00:07:54.663 04:03:56 -- scripts/common.sh@352 -- # local d=2 00:07:54.663 04:03:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.663 04:03:56 -- scripts/common.sh@354 -- # echo 2 00:07:54.663 04:03:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.663 04:03:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.663 04:03:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.663 04:03:56 -- scripts/common.sh@367 -- # return 0 00:07:54.663 04:03:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.663 04:03:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.663 --rc genhtml_branch_coverage=1 00:07:54.663 --rc genhtml_function_coverage=1 00:07:54.663 --rc genhtml_legend=1 00:07:54.663 --rc geninfo_all_blocks=1 00:07:54.663 --rc geninfo_unexecuted_blocks=1 00:07:54.663 00:07:54.663 ' 00:07:54.663 04:03:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.663 --rc genhtml_branch_coverage=1 00:07:54.663 --rc genhtml_function_coverage=1 00:07:54.663 --rc genhtml_legend=1 00:07:54.663 --rc geninfo_all_blocks=1 00:07:54.663 --rc geninfo_unexecuted_blocks=1 00:07:54.663 00:07:54.663 ' 00:07:54.663 04:03:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.663 --rc genhtml_branch_coverage=1 00:07:54.663 --rc genhtml_function_coverage=1 00:07:54.663 --rc genhtml_legend=1 00:07:54.663 --rc geninfo_all_blocks=1 00:07:54.663 --rc geninfo_unexecuted_blocks=1 00:07:54.663 00:07:54.663 ' 00:07:54.663 04:03:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.663 --rc genhtml_branch_coverage=1 00:07:54.663 --rc genhtml_function_coverage=1 00:07:54.663 --rc genhtml_legend=1 00:07:54.663 --rc geninfo_all_blocks=1 00:07:54.663 --rc geninfo_unexecuted_blocks=1 00:07:54.663 00:07:54.663 ' 00:07:54.663 04:03:56 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:54.663 04:03:56 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:54.663 04:03:56 -- common/autotest_common.sh@34 -- # set -e 00:07:54.663 04:03:56 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:54.663 04:03:56 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:54.663 04:03:56 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:54.663 04:03:56 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:54.663 04:03:56 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:54.663 04:03:56 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:54.663 04:03:56 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:54.663 04:03:56 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:54.663 04:03:56 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:07:54.663 04:03:56 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:54.663 04:03:56 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:54.663 04:03:56 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:54.663 04:03:56 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:54.663 04:03:56 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:54.663 04:03:56 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:54.663 04:03:56 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:54.663 04:03:56 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:54.663 04:03:56 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:54.663 04:03:56 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:54.663 04:03:56 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:54.663 04:03:56 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:54.663 04:03:56 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:54.663 04:03:56 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:54.663 04:03:56 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:54.663 04:03:56 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:54.663 04:03:56 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:54.663 04:03:56 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:54.663 04:03:56 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:54.663 04:03:56 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:54.663 04:03:56 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:54.663 04:03:56 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:54.663 04:03:56 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:54.663 04:03:56 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:54.663 04:03:56 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:54.663 04:03:56 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:54.663 04:03:56 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:54.663 04:03:56 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:54.663 04:03:56 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:54.663 04:03:56 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:54.663 04:03:56 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:54.663 04:03:56 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:54.663 04:03:56 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:54.663 04:03:56 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:54.663 04:03:56 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:54.663 04:03:56 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:54.663 04:03:56 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:54.663 04:03:56 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:54.663 04:03:56 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:54.664 04:03:56 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:54.664 04:03:56 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:54.664 04:03:56 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:54.664 04:03:56 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:54.664 04:03:56 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:54.664 04:03:56 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:54.664 04:03:56 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:07:54.664 04:03:56 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:54.664 04:03:56 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:54.664 04:03:56 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:54.664 04:03:56 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:54.664 04:03:56 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:54.664 04:03:56 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:54.664 04:03:56 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:54.664 04:03:56 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:54.664 04:03:56 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:54.664 04:03:56 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:54.664 04:03:56 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:54.664 04:03:56 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:54.664 04:03:56 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:54.664 04:03:56 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:54.664 04:03:56 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:54.664 04:03:56 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:54.664 04:03:56 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:54.664 04:03:56 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:54.664 04:03:56 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:54.664 04:03:56 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:54.664 04:03:56 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:54.664 04:03:56 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:54.664 04:03:56 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:54.664 04:03:56 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:54.664 04:03:56 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:54.664 04:03:56 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:54.664 04:03:56 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:54.664 04:03:56 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:54.664 04:03:56 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:54.664 04:03:56 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:54.664 04:03:56 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:54.664 04:03:56 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:54.664 04:03:56 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:54.664 04:03:56 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:54.664 04:03:56 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:54.664 04:03:56 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:54.664 04:03:56 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:54.664 04:03:56 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:54.664 04:03:56 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:54.664 04:03:56 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:54.664 04:03:56 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:54.664 04:03:56 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:07:54.664 04:03:56 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:54.664 04:03:56 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:54.664 #define SPDK_CONFIG_H 00:07:54.664 #define SPDK_CONFIG_APPS 1 00:07:54.664 #define SPDK_CONFIG_ARCH native 00:07:54.664 #undef SPDK_CONFIG_ASAN 00:07:54.664 #define SPDK_CONFIG_AVAHI 1 00:07:54.664 #undef SPDK_CONFIG_CET 00:07:54.664 #define SPDK_CONFIG_COVERAGE 1 00:07:54.664 #define SPDK_CONFIG_CROSS_PREFIX 00:07:54.664 #undef SPDK_CONFIG_CRYPTO 00:07:54.664 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:54.664 #undef SPDK_CONFIG_CUSTOMOCF 00:07:54.664 #undef SPDK_CONFIG_DAOS 00:07:54.664 #define SPDK_CONFIG_DAOS_DIR 00:07:54.664 #define SPDK_CONFIG_DEBUG 1 00:07:54.664 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:54.664 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:54.664 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:54.664 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:54.664 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:54.664 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:54.664 #define SPDK_CONFIG_EXAMPLES 1 00:07:54.664 #undef SPDK_CONFIG_FC 00:07:54.664 #define SPDK_CONFIG_FC_PATH 00:07:54.664 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:54.664 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:54.664 #undef SPDK_CONFIG_FUSE 00:07:54.664 #undef SPDK_CONFIG_FUZZER 00:07:54.664 #define SPDK_CONFIG_FUZZER_LIB 00:07:54.664 #define SPDK_CONFIG_GOLANG 1 00:07:54.664 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:54.664 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:54.664 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:54.664 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:54.664 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:54.664 #define SPDK_CONFIG_IDXD 1 00:07:54.664 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:54.664 #undef SPDK_CONFIG_IPSEC_MB 00:07:54.664 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:54.664 #define SPDK_CONFIG_ISAL 1 00:07:54.664 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:54.664 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:54.664 #define SPDK_CONFIG_LIBDIR 00:07:54.664 #undef SPDK_CONFIG_LTO 00:07:54.664 #define SPDK_CONFIG_MAX_LCORES 00:07:54.664 #define SPDK_CONFIG_NVME_CUSE 1 00:07:54.664 #undef SPDK_CONFIG_OCF 00:07:54.664 #define SPDK_CONFIG_OCF_PATH 00:07:54.664 #define SPDK_CONFIG_OPENSSL_PATH 00:07:54.664 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:54.664 #undef SPDK_CONFIG_PGO_USE 00:07:54.664 #define SPDK_CONFIG_PREFIX /usr/local 00:07:54.664 #undef SPDK_CONFIG_RAID5F 00:07:54.664 #undef SPDK_CONFIG_RBD 00:07:54.664 #define SPDK_CONFIG_RDMA 1 00:07:54.664 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:54.664 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:54.664 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:54.664 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:54.664 #define SPDK_CONFIG_SHARED 1 00:07:54.664 #undef SPDK_CONFIG_SMA 00:07:54.664 #define SPDK_CONFIG_TESTS 1 00:07:54.664 #undef SPDK_CONFIG_TSAN 00:07:54.664 #define SPDK_CONFIG_UBLK 1 00:07:54.664 #define SPDK_CONFIG_UBSAN 1 00:07:54.664 #undef SPDK_CONFIG_UNIT_TESTS 00:07:54.664 #undef SPDK_CONFIG_URING 00:07:54.664 #define SPDK_CONFIG_URING_PATH 00:07:54.664 #undef SPDK_CONFIG_URING_ZNS 00:07:54.664 #define SPDK_CONFIG_USDT 1 00:07:54.664 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:54.664 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:54.664 #undef SPDK_CONFIG_VFIO_USER 00:07:54.664 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:07:54.664 #define SPDK_CONFIG_VHOST 1 00:07:54.664 #define SPDK_CONFIG_VIRTIO 1 00:07:54.664 #undef SPDK_CONFIG_VTUNE 00:07:54.664 #define SPDK_CONFIG_VTUNE_DIR 00:07:54.664 #define SPDK_CONFIG_WERROR 1 00:07:54.664 #define SPDK_CONFIG_WPDK_DIR 00:07:54.664 #undef SPDK_CONFIG_XNVME 00:07:54.664 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:54.664 04:03:56 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:54.664 04:03:56 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.664 04:03:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.664 04:03:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.664 04:03:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.664 04:03:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.664 04:03:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.664 04:03:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.664 04:03:56 -- paths/export.sh@5 -- # export PATH 00:07:54.664 04:03:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.664 04:03:56 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:54.664 04:03:56 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:54.664 04:03:56 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:54.664 04:03:56 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:54.664 04:03:56 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:54.664 04:03:56 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:54.664 04:03:56 -- pm/common@16 -- # TEST_TAG=N/A 00:07:54.664 04:03:56 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:54.664 04:03:56 -- common/autotest_common.sh@52 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:54.665 04:03:56 -- common/autotest_common.sh@56 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:54.665 04:03:56 -- common/autotest_common.sh@58 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:54.665 04:03:56 -- common/autotest_common.sh@60 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:54.665 04:03:56 -- common/autotest_common.sh@62 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:54.665 04:03:56 -- common/autotest_common.sh@64 -- # : 00:07:54.665 04:03:56 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:54.665 04:03:56 -- common/autotest_common.sh@66 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:54.665 04:03:56 -- common/autotest_common.sh@68 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:54.665 04:03:56 -- common/autotest_common.sh@70 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:54.665 04:03:56 -- common/autotest_common.sh@72 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:54.665 04:03:56 -- common/autotest_common.sh@74 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:54.665 04:03:56 -- common/autotest_common.sh@76 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:54.665 04:03:56 -- common/autotest_common.sh@78 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:54.665 04:03:56 -- common/autotest_common.sh@80 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:54.665 04:03:56 -- common/autotest_common.sh@82 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:54.665 04:03:56 -- common/autotest_common.sh@84 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:54.665 04:03:56 -- common/autotest_common.sh@86 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:54.665 04:03:56 -- common/autotest_common.sh@88 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:54.665 04:03:56 -- common/autotest_common.sh@90 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:54.665 04:03:56 -- common/autotest_common.sh@92 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:54.665 04:03:56 -- common/autotest_common.sh@94 -- # : 0 00:07:54.665 04:03:56 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:54.665 04:03:56 -- common/autotest_common.sh@96 -- # : tcp 00:07:54.665 04:03:56 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:54.665 04:03:56 -- common/autotest_common.sh@98 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:54.665 04:03:56 -- common/autotest_common.sh@100 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:54.665 04:03:56 -- common/autotest_common.sh@102 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:54.665 04:03:56 -- common/autotest_common.sh@104 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:54.665 04:03:56 -- common/autotest_common.sh@106 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:54.665 04:03:56 -- common/autotest_common.sh@108 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:54.665 04:03:56 -- common/autotest_common.sh@110 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:54.665 04:03:56 -- common/autotest_common.sh@112 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:54.665 04:03:56 -- common/autotest_common.sh@114 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:54.665 04:03:56 -- common/autotest_common.sh@116 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:54.665 04:03:56 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:54.665 04:03:56 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:54.665 04:03:56 -- common/autotest_common.sh@120 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:54.665 04:03:56 -- common/autotest_common.sh@122 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:54.665 04:03:56 -- common/autotest_common.sh@124 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:54.665 04:03:56 -- common/autotest_common.sh@126 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:54.665 04:03:56 -- common/autotest_common.sh@128 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:54.665 04:03:56 -- common/autotest_common.sh@130 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:54.665 04:03:56 -- common/autotest_common.sh@132 -- # : v23.11 00:07:54.665 04:03:56 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:54.665 04:03:56 -- common/autotest_common.sh@134 -- # : true 00:07:54.665 04:03:56 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:54.665 04:03:56 -- common/autotest_common.sh@136 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:54.665 04:03:56 -- common/autotest_common.sh@138 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:54.665 04:03:56 -- common/autotest_common.sh@140 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:54.665 04:03:56 -- 
common/autotest_common.sh@142 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:54.665 04:03:56 -- common/autotest_common.sh@144 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:54.665 04:03:56 -- common/autotest_common.sh@146 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:54.665 04:03:56 -- common/autotest_common.sh@148 -- # : 00:07:54.665 04:03:56 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:54.665 04:03:56 -- common/autotest_common.sh@150 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:54.665 04:03:56 -- common/autotest_common.sh@152 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:54.665 04:03:56 -- common/autotest_common.sh@154 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:54.665 04:03:56 -- common/autotest_common.sh@156 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:54.665 04:03:56 -- common/autotest_common.sh@158 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:54.665 04:03:56 -- common/autotest_common.sh@160 -- # : 0 00:07:54.665 04:03:56 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:54.665 04:03:56 -- common/autotest_common.sh@163 -- # : 00:07:54.665 04:03:56 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:54.665 04:03:56 -- common/autotest_common.sh@165 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:54.665 04:03:56 -- common/autotest_common.sh@167 -- # : 1 00:07:54.665 04:03:56 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:54.665 04:03:56 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:54.665 04:03:56 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:54.665 04:03:56 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:54.665 04:03:56 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:54.665 04:03:56 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:54.665 04:03:56 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:54.665 04:03:56 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:54.665 04:03:56 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:54.665 04:03:56 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:54.666 04:03:56 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:54.666 04:03:56 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:54.666 04:03:56 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:54.666 04:03:56 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:54.666 04:03:56 -- common/autotest_common.sh@196 -- # cat 00:07:54.666 04:03:56 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:54.666 04:03:56 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:54.666 04:03:56 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:54.666 04:03:56 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:54.666 04:03:56 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:54.666 04:03:56 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:54.666 04:03:56 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:54.666 04:03:56 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:54.666 04:03:56 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:54.666 04:03:56 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:54.666 04:03:56 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:54.666 04:03:56 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:54.666 04:03:56 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:54.666 04:03:56 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:54.666 04:03:56 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:54.666 04:03:56 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:54.666 04:03:56 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:54.666 04:03:56 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:54.666 04:03:56 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:54.666 04:03:56 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:54.666 04:03:56 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:54.666 04:03:56 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:54.666 04:03:56 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:54.666 04:03:56 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:54.666 04:03:56 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:54.666 04:03:56 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:54.666 04:03:56 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:54.666 04:03:56 -- common/autotest_common.sh@259 -- # valgrind= 00:07:54.666 04:03:56 -- common/autotest_common.sh@265 -- # uname -s 00:07:54.666 04:03:56 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:54.666 04:03:56 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:54.666 04:03:56 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:54.666 04:03:56 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:54.666 04:03:56 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:54.666 04:03:56 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:54.666 04:03:56 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:54.666 04:03:56 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:54.666 04:03:56 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:54.666 04:03:56 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:54.666 04:03:56 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:54.666 04:03:56 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:54.666 04:03:56 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:54.666 04:03:56 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:54.666 04:03:56 -- common/autotest_common.sh@319 -- # [[ 
-z 72392 ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@319 -- # kill -0 72392 00:07:54.666 04:03:56 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:54.666 04:03:56 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:54.666 04:03:56 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:54.666 04:03:56 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:54.666 04:03:56 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:54.666 04:03:56 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:54.666 04:03:56 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:54.666 04:03:56 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.YEpcyE 00:07:54.666 04:03:56 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:54.666 04:03:56 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:54.666 04:03:56 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.YEpcyE/tests/target /tmp/spdk.YEpcyE 00:07:54.666 04:03:56 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@328 -- # df -T 00:07:54.666 04:03:56 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=13296082944 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=6287859712 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265163776 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:54.666 04:03:56 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=13296082944 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=6287859712 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:54.666 04:03:56 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # avails["$mount"]=98364866560 00:07:54.666 04:03:56 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:54.666 04:03:56 -- common/autotest_common.sh@364 -- # uses["$mount"]=1337913344 00:07:54.666 04:03:56 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:54.666 04:03:56 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:07:54.666 * Looking for test storage... 00:07:54.666 04:03:56 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:54.667 04:03:56 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:54.667 04:03:56 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.667 04:03:56 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:54.667 04:03:56 -- common/autotest_common.sh@373 -- # mount=/home 00:07:54.667 04:03:56 -- common/autotest_common.sh@375 -- # target_space=13296082944 00:07:54.667 04:03:56 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:54.667 04:03:56 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:54.667 04:03:56 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:54.667 04:03:56 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:54.667 04:03:56 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:54.667 04:03:56 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.667 04:03:56 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.667 04:03:56 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.667 04:03:56 -- common/autotest_common.sh@390 -- # return 0 00:07:54.667 04:03:56 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:54.667 04:03:56 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:54.667 04:03:56 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:54.667 04:03:56 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:54.667 04:03:56 -- common/autotest_common.sh@1682 -- # true 00:07:54.667 04:03:56 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:54.667 04:03:56 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:54.667 04:03:56 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:54.667 04:03:56 -- common/autotest_common.sh@27 -- # exec 00:07:54.667 04:03:56 -- common/autotest_common.sh@29 -- # exec 00:07:54.667 04:03:56 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:54.667 04:03:56 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:54.667 04:03:56 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:54.667 04:03:56 -- common/autotest_common.sh@18 -- # set -x 00:07:54.667 04:03:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.667 04:03:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.667 04:03:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.927 04:03:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.927 04:03:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.927 04:03:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.927 04:03:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.927 04:03:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.927 04:03:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.927 04:03:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.927 04:03:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.927 04:03:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.927 04:03:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.927 04:03:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.927 04:03:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.927 04:03:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.927 04:03:56 -- scripts/common.sh@344 -- # : 1 00:07:54.927 04:03:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.927 04:03:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.927 04:03:56 -- scripts/common.sh@364 -- # decimal 1 00:07:54.927 04:03:56 -- scripts/common.sh@352 -- # local d=1 00:07:54.927 04:03:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.927 04:03:56 -- scripts/common.sh@354 -- # echo 1 00:07:54.927 04:03:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.927 04:03:56 -- scripts/common.sh@365 -- # decimal 2 00:07:54.927 04:03:56 -- scripts/common.sh@352 -- # local d=2 00:07:54.927 04:03:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.927 04:03:56 -- scripts/common.sh@354 -- # echo 2 00:07:54.927 04:03:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.927 04:03:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.927 04:03:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.927 04:03:56 -- scripts/common.sh@367 -- # return 0 00:07:54.927 04:03:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.927 04:03:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.927 --rc genhtml_branch_coverage=1 00:07:54.927 --rc genhtml_function_coverage=1 00:07:54.927 --rc genhtml_legend=1 00:07:54.927 --rc geninfo_all_blocks=1 00:07:54.927 --rc geninfo_unexecuted_blocks=1 00:07:54.927 00:07:54.927 ' 00:07:54.927 04:03:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.927 --rc genhtml_branch_coverage=1 00:07:54.927 --rc genhtml_function_coverage=1 00:07:54.927 --rc genhtml_legend=1 00:07:54.927 --rc geninfo_all_blocks=1 00:07:54.927 --rc geninfo_unexecuted_blocks=1 00:07:54.927 00:07:54.927 ' 00:07:54.927 04:03:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.927 --rc genhtml_branch_coverage=1 00:07:54.927 --rc genhtml_function_coverage=1 00:07:54.927 --rc genhtml_legend=1 00:07:54.927 --rc geninfo_all_blocks=1 00:07:54.927 --rc 
geninfo_unexecuted_blocks=1 00:07:54.927 00:07:54.927 ' 00:07:54.927 04:03:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.927 --rc genhtml_branch_coverage=1 00:07:54.927 --rc genhtml_function_coverage=1 00:07:54.927 --rc genhtml_legend=1 00:07:54.927 --rc geninfo_all_blocks=1 00:07:54.927 --rc geninfo_unexecuted_blocks=1 00:07:54.927 00:07:54.927 ' 00:07:54.927 04:03:56 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.927 04:03:56 -- nvmf/common.sh@7 -- # uname -s 00:07:54.927 04:03:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.927 04:03:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.927 04:03:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.927 04:03:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.927 04:03:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.927 04:03:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.927 04:03:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.927 04:03:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.927 04:03:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.927 04:03:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.927 04:03:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:07:54.927 04:03:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:07:54.927 04:03:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.927 04:03:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.927 04:03:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.927 04:03:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.928 04:03:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.928 04:03:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.928 04:03:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.928 04:03:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.928 04:03:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.928 04:03:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.928 04:03:56 -- paths/export.sh@5 -- # export PATH 00:07:54.928 04:03:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.928 04:03:56 -- nvmf/common.sh@46 -- # : 0 00:07:54.928 04:03:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:54.928 04:03:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:54.928 04:03:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:54.928 04:03:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.928 04:03:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.928 04:03:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:54.928 04:03:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:54.928 04:03:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:54.928 04:03:56 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:54.928 04:03:56 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:54.928 04:03:56 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:54.928 04:03:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:54.928 04:03:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.928 04:03:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:54.928 04:03:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:54.928 04:03:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:54.928 04:03:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.928 04:03:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.928 04:03:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.928 04:03:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:54.928 04:03:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:54.928 04:03:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:54.928 04:03:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:54.928 04:03:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:54.928 04:03:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:54.928 04:03:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.928 04:03:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.928 04:03:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:54.928 04:03:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:54.928 04:03:56 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:54.928 04:03:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:54.928 04:03:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:54.928 04:03:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.928 04:03:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:54.928 04:03:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:54.928 04:03:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:54.928 04:03:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:54.928 04:03:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:54.928 04:03:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:54.928 Cannot find device "nvmf_tgt_br" 00:07:54.928 04:03:56 -- nvmf/common.sh@154 -- # true 00:07:54.928 04:03:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.928 Cannot find device "nvmf_tgt_br2" 00:07:54.928 04:03:56 -- nvmf/common.sh@155 -- # true 00:07:54.928 04:03:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:54.928 04:03:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:54.928 Cannot find device "nvmf_tgt_br" 00:07:54.928 04:03:56 -- nvmf/common.sh@157 -- # true 00:07:54.928 04:03:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:54.928 Cannot find device "nvmf_tgt_br2" 00:07:54.928 04:03:56 -- nvmf/common.sh@158 -- # true 00:07:54.928 04:03:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:54.928 04:03:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:54.928 04:03:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.928 04:03:56 -- nvmf/common.sh@161 -- # true 00:07:54.928 04:03:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.928 04:03:56 -- nvmf/common.sh@162 -- # true 00:07:54.928 04:03:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.928 04:03:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.928 04:03:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.928 04:03:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.928 04:03:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.188 04:03:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.188 04:03:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.188 04:03:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:55.188 04:03:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:55.188 04:03:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:55.188 04:03:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:55.188 04:03:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:55.188 04:03:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:55.188 04:03:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.188 04:03:56 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:55.188 04:03:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:55.188 04:03:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:55.188 04:03:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:55.188 04:03:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:55.188 04:03:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:55.188 04:03:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:55.188 04:03:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:55.188 04:03:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:55.188 04:03:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:55.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:07:55.188 00:07:55.188 --- 10.0.0.2 ping statistics --- 00:07:55.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.188 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:55.188 04:03:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:55.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:55.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:07:55.188 00:07:55.188 --- 10.0.0.3 ping statistics --- 00:07:55.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.188 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:55.188 04:03:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:55.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:55.188 00:07:55.188 --- 10.0.0.1 ping statistics --- 00:07:55.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.188 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:55.188 04:03:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.188 04:03:56 -- nvmf/common.sh@421 -- # return 0 00:07:55.188 04:03:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:55.188 04:03:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.188 04:03:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:55.188 04:03:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:55.188 04:03:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.188 04:03:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:55.188 04:03:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:55.188 04:03:56 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:55.188 04:03:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:55.188 04:03:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.188 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:55.188 ************************************ 00:07:55.188 START TEST nvmf_filesystem_no_in_capsule 00:07:55.188 ************************************ 00:07:55.188 04:03:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:55.188 04:03:56 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:55.188 04:03:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:55.188 04:03:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:55.188 04:03:56 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:55.188 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:55.188 04:03:56 -- nvmf/common.sh@469 -- # nvmfpid=72566 00:07:55.188 04:03:56 -- nvmf/common.sh@470 -- # waitforlisten 72566 00:07:55.188 04:03:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.188 04:03:56 -- common/autotest_common.sh@829 -- # '[' -z 72566 ']' 00:07:55.188 04:03:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.188 04:03:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.188 04:03:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.188 04:03:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.188 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:55.188 [2024-11-26 04:03:56.923898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.188 [2024-11-26 04:03:56.923958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.447 [2024-11-26 04:03:57.055802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.447 [2024-11-26 04:03:57.123535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.447 [2024-11-26 04:03:57.123674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.447 [2024-11-26 04:03:57.123686] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.447 [2024-11-26 04:03:57.123693] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
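Taken together, the nvmf_veth_init steps traced above build a small veth/bridge topology with the target end isolated in its own network namespace. A condensed sketch, with commands and names lifted from the trace (the per-link 'up' steps are folded into a loop and all error handling is dropped):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, default namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge ties the three *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # initiator -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target namespace -> initiator
  modprobe nvme-tcp                                                # host-side NVMe/TCP initiator driver

With the namespace in place, nvmf_tgt is launched under the NVMF_TARGET_NS_CMD prefix (ip netns exec nvmf_tgt_ns_spdk), which is why the EAL and reactor output that follows comes from pid 72566 inside the namespace.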
00:07:55.447 [2024-11-26 04:03:57.123821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.447 [2024-11-26 04:03:57.124612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.447 [2024-11-26 04:03:57.124760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.447 [2024-11-26 04:03:57.124767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.383 04:03:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.383 04:03:57 -- common/autotest_common.sh@862 -- # return 0 00:07:56.383 04:03:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:56.383 04:03:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.383 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.383 04:03:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.383 04:03:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:56.383 04:03:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:56.383 04:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.383 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.383 [2024-11-26 04:03:57.984165] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.383 04:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.383 04:03:58 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:56.383 04:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.383 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 Malloc1 00:07:56.642 04:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.642 04:03:58 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:56.642 04:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.642 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 04:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.642 04:03:58 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:56.642 04:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.642 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 04:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.642 04:03:58 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.642 04:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.642 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 [2024-11-26 04:03:58.178573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.642 04:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.642 04:03:58 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:56.642 04:03:58 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:56.642 04:03:58 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:56.642 04:03:58 -- common/autotest_common.sh@1369 -- # local bs 00:07:56.642 04:03:58 -- common/autotest_common.sh@1370 -- # local nb 00:07:56.642 04:03:58 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:56.642 04:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.642 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 
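On the RPC side, the target bring-up just traced reduces to a handful of calls; rpc_cmd forwards them to scripts/rpc.py over /var/tmp/spdk.sock, so an equivalent manual sketch (names and arguments exactly as in the trace) would be:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # TCP transport; -c 0 disables in-capsule data
  rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # get_bdev_size: block_size * num_blocks from the bdev JSON, i.e. 512 * 1048576 = 536870912 bytes
  rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size'
  rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'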
04:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.642 04:03:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:56.642 { 00:07:56.642 "aliases": [ 00:07:56.642 "20c9c7ab-d436-4835-b1af-ab8fadb48424" 00:07:56.642 ], 00:07:56.642 "assigned_rate_limits": { 00:07:56.642 "r_mbytes_per_sec": 0, 00:07:56.642 "rw_ios_per_sec": 0, 00:07:56.642 "rw_mbytes_per_sec": 0, 00:07:56.642 "w_mbytes_per_sec": 0 00:07:56.642 }, 00:07:56.642 "block_size": 512, 00:07:56.642 "claim_type": "exclusive_write", 00:07:56.642 "claimed": true, 00:07:56.642 "driver_specific": {}, 00:07:56.642 "memory_domains": [ 00:07:56.642 { 00:07:56.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.642 "dma_device_type": 2 00:07:56.642 } 00:07:56.642 ], 00:07:56.642 "name": "Malloc1", 00:07:56.642 "num_blocks": 1048576, 00:07:56.642 "product_name": "Malloc disk", 00:07:56.642 "supported_io_types": { 00:07:56.642 "abort": true, 00:07:56.642 "compare": false, 00:07:56.642 "compare_and_write": false, 00:07:56.642 "flush": true, 00:07:56.642 "nvme_admin": false, 00:07:56.642 "nvme_io": false, 00:07:56.642 "read": true, 00:07:56.642 "reset": true, 00:07:56.642 "unmap": true, 00:07:56.642 "write": true, 00:07:56.642 "write_zeroes": true 00:07:56.642 }, 00:07:56.642 "uuid": "20c9c7ab-d436-4835-b1af-ab8fadb48424", 00:07:56.642 "zoned": false 00:07:56.642 } 00:07:56.642 ]' 00:07:56.642 04:03:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:56.642 04:03:58 -- common/autotest_common.sh@1372 -- # bs=512 00:07:56.642 04:03:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:56.642 04:03:58 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:56.642 04:03:58 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:56.642 04:03:58 -- common/autotest_common.sh@1377 -- # echo 512 00:07:56.642 04:03:58 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:56.642 04:03:58 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.902 04:03:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.902 04:03:58 -- common/autotest_common.sh@1187 -- # local i=0 00:07:56.902 04:03:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.902 04:03:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:56.902 04:03:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:58.805 04:04:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:58.805 04:04:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:58.805 04:04:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:58.805 04:04:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:58.805 04:04:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:58.805 04:04:00 -- common/autotest_common.sh@1197 -- # return 0 00:07:58.805 04:04:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:58.805 04:04:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:58.805 04:04:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:58.805 04:04:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:58.805 04:04:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:58.805 04:04:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:58.805 04:04:00 -- 
setup/common.sh@80 -- # echo 536870912 00:07:58.805 04:04:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:58.805 04:04:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:58.805 04:04:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:58.805 04:04:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:59.064 04:04:00 -- target/filesystem.sh@69 -- # partprobe 00:07:59.064 04:04:00 -- target/filesystem.sh@70 -- # sleep 1 00:08:00.093 04:04:01 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:00.093 04:04:01 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:00.093 04:04:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:00.093 04:04:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.093 04:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:00.093 ************************************ 00:08:00.093 START TEST filesystem_ext4 00:08:00.093 ************************************ 00:08:00.093 04:04:01 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:00.093 04:04:01 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:00.093 04:04:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.093 04:04:01 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:00.093 04:04:01 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:00.093 04:04:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:00.093 04:04:01 -- common/autotest_common.sh@914 -- # local i=0 00:08:00.093 04:04:01 -- common/autotest_common.sh@915 -- # local force 00:08:00.093 04:04:01 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:00.093 04:04:01 -- common/autotest_common.sh@918 -- # force=-F 00:08:00.093 04:04:01 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:00.093 mke2fs 1.47.0 (5-Feb-2023) 00:08:00.093 Discarding device blocks: 0/522240 done 00:08:00.093 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:00.093 Filesystem UUID: ad39637c-5652-4518-9325-3dc7fa9882c5 00:08:00.093 Superblock backups stored on blocks: 00:08:00.093 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:00.093 00:08:00.093 Allocating group tables: 0/64 done 00:08:00.093 Writing inode tables: 0/64 done 00:08:00.352 Creating journal (8192 blocks): done 00:08:00.352 Writing superblocks and filesystem accounting information: 0/64 done 00:08:00.352 00:08:00.352 04:04:01 -- common/autotest_common.sh@931 -- # return 0 00:08:00.352 04:04:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.621 04:04:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.621 04:04:07 -- target/filesystem.sh@25 -- # sync 00:08:05.621 04:04:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.621 04:04:07 -- target/filesystem.sh@27 -- # sync 00:08:05.621 04:04:07 -- target/filesystem.sh@29 -- # i=0 00:08:05.621 04:04:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.621 04:04:07 -- target/filesystem.sh@37 -- # kill -0 72566 00:08:05.621 04:04:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.621 04:04:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.621 04:04:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.621 04:04:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.621 00:08:05.621 real 0m5.570s 00:08:05.621 user 0m0.025s 00:08:05.621 sys 0m0.072s 00:08:05.621 
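On the initiator side, the pattern just exercised for ext4 (and repeated below for btrfs and xfs) is: connect, wait for the namespace, partition, make a filesystem, run a tiny write/remove cycle, unmount, then check that the target survived. A sketch using the values from the trace; the retry loop is a paraphrase of waitforserial, and nvme0n1 is simply the name the kernel happened to assign:

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
       --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # -> nvme0n1
  mkdir -p /mnt/device
  parted -s /dev/$dev mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/${dev}p1             # mkfs.btrfs -f / mkfs.xfs -f in the other two subtests
  mount /dev/${dev}p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                                   # target (pid 72566 here) must still be alive
  lsblk -l -o NAME | grep -q -w "$dev"                 # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w "${dev}p1"             # partition still visible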
04:04:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.621 04:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:05.621 ************************************ 00:08:05.621 END TEST filesystem_ext4 00:08:05.621 ************************************ 00:08:05.621 04:04:07 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:05.621 04:04:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:05.621 04:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.621 04:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:05.621 ************************************ 00:08:05.621 START TEST filesystem_btrfs 00:08:05.621 ************************************ 00:08:05.621 04:04:07 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:05.621 04:04:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:05.621 04:04:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.621 04:04:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:05.621 04:04:07 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:05.621 04:04:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:05.621 04:04:07 -- common/autotest_common.sh@914 -- # local i=0 00:08:05.621 04:04:07 -- common/autotest_common.sh@915 -- # local force 00:08:05.621 04:04:07 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:05.621 04:04:07 -- common/autotest_common.sh@920 -- # force=-f 00:08:05.621 04:04:07 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:05.881 btrfs-progs v6.8.1 00:08:05.881 See https://btrfs.readthedocs.io for more information. 00:08:05.881 00:08:05.881 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:05.881 NOTE: several default settings have changed in version 5.15, please make sure 00:08:05.881 this does not affect your deployments: 00:08:05.881 - DUP for metadata (-m dup) 00:08:05.881 - enabled no-holes (-O no-holes) 00:08:05.881 - enabled free-space-tree (-R free-space-tree) 00:08:05.881 00:08:05.881 Label: (null) 00:08:05.881 UUID: 62b7d50c-2a33-4a7e-8f34-90a03b25c392 00:08:05.881 Node size: 16384 00:08:05.881 Sector size: 4096 (CPU page size: 4096) 00:08:05.881 Filesystem size: 510.00MiB 00:08:05.881 Block group profiles: 00:08:05.881 Data: single 8.00MiB 00:08:05.881 Metadata: DUP 32.00MiB 00:08:05.881 System: DUP 8.00MiB 00:08:05.881 SSD detected: yes 00:08:05.881 Zoned device: no 00:08:05.881 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:05.881 Checksum: crc32c 00:08:05.881 Number of devices: 1 00:08:05.881 Devices: 00:08:05.881 ID SIZE PATH 00:08:05.881 1 510.00MiB /dev/nvme0n1p1 00:08:05.881 00:08:05.881 04:04:07 -- common/autotest_common.sh@931 -- # return 0 00:08:05.881 04:04:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.881 04:04:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.881 04:04:07 -- target/filesystem.sh@25 -- # sync 00:08:05.881 04:04:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.881 04:04:07 -- target/filesystem.sh@27 -- # sync 00:08:05.881 04:04:07 -- target/filesystem.sh@29 -- # i=0 00:08:05.881 04:04:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.881 04:04:07 -- target/filesystem.sh@37 -- # kill -0 72566 00:08:05.881 04:04:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.881 04:04:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.881 04:04:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.881 04:04:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.881 00:08:05.881 real 0m0.279s 00:08:05.881 user 0m0.022s 00:08:05.881 sys 0m0.064s 00:08:05.881 04:04:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.881 04:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:05.881 ************************************ 00:08:05.881 END TEST filesystem_btrfs 00:08:05.881 ************************************ 00:08:06.141 04:04:07 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.141 04:04:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:06.141 04:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.141 04:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:06.141 ************************************ 00:08:06.141 START TEST filesystem_xfs 00:08:06.141 ************************************ 00:08:06.141 04:04:07 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.141 04:04:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.141 04:04:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.141 04:04:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.141 04:04:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:06.141 04:04:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:06.141 04:04:07 -- common/autotest_common.sh@914 -- # local i=0 00:08:06.141 04:04:07 -- common/autotest_common.sh@915 -- # local force 00:08:06.141 04:04:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:06.141 04:04:07 -- common/autotest_common.sh@920 -- # force=-f 00:08:06.141 04:04:07 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:06.141 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.141 = sectsz=512 attr=2, projid32bit=1 00:08:06.141 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.141 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.141 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.141 = sunit=0 swidth=0 blks 00:08:06.141 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.141 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.141 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.141 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:07.078 Discarding blocks...Done. 00:08:07.078 04:04:08 -- common/autotest_common.sh@931 -- # return 0 00:08:07.078 04:04:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.621 04:04:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.621 04:04:10 -- target/filesystem.sh@25 -- # sync 00:08:09.621 04:04:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.621 04:04:10 -- target/filesystem.sh@27 -- # sync 00:08:09.621 04:04:10 -- target/filesystem.sh@29 -- # i=0 00:08:09.621 04:04:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.621 04:04:10 -- target/filesystem.sh@37 -- # kill -0 72566 00:08:09.621 04:04:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.621 04:04:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.621 04:04:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.621 04:04:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.621 ************************************ 00:08:09.621 END TEST filesystem_xfs 00:08:09.621 ************************************ 00:08:09.621 00:08:09.621 real 0m3.277s 00:08:09.621 user 0m0.024s 00:08:09.621 sys 0m0.057s 00:08:09.621 04:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.621 04:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:09.621 04:04:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.621 04:04:11 -- target/filesystem.sh@93 -- # sync 00:08:09.621 04:04:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.621 04:04:11 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.621 04:04:11 -- common/autotest_common.sh@1208 -- # local i=0 00:08:09.621 04:04:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:09.621 04:04:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.621 04:04:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:09.621 04:04:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.621 04:04:11 -- common/autotest_common.sh@1220 -- # return 0 00:08:09.621 04:04:11 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.621 04:04:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.621 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:09.621 04:04:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.621 04:04:11 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.621 04:04:11 -- target/filesystem.sh@101 -- # killprocess 72566 00:08:09.621 04:04:11 -- common/autotest_common.sh@936 -- # '[' -z 72566 ']' 00:08:09.621 04:04:11 -- common/autotest_common.sh@940 -- # kill -0 72566 00:08:09.621 04:04:11 -- common/autotest_common.sh@941 -- # uname 00:08:09.621 04:04:11 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:09.621 04:04:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72566 00:08:09.621 killing process with pid 72566 00:08:09.621 04:04:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:09.621 04:04:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:09.621 04:04:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72566' 00:08:09.621 04:04:11 -- common/autotest_common.sh@955 -- # kill 72566 00:08:09.621 04:04:11 -- common/autotest_common.sh@960 -- # wait 72566 00:08:09.880 04:04:11 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.880 00:08:09.880 real 0m14.684s 00:08:09.880 user 0m56.847s 00:08:09.880 sys 0m1.649s 00:08:09.880 04:04:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.880 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:09.880 ************************************ 00:08:09.880 END TEST nvmf_filesystem_no_in_capsule 00:08:09.880 ************************************ 00:08:09.880 04:04:11 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:09.880 04:04:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.880 04:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.880 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:09.880 ************************************ 00:08:09.880 START TEST nvmf_filesystem_in_capsule 00:08:09.880 ************************************ 00:08:09.880 04:04:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:09.880 04:04:11 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:09.880 04:04:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:09.880 04:04:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:09.880 04:04:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.880 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:09.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.880 04:04:11 -- nvmf/common.sh@469 -- # nvmfpid=72938 00:08:09.880 04:04:11 -- nvmf/common.sh@470 -- # waitforlisten 72938 00:08:09.880 04:04:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.880 04:04:11 -- common/autotest_common.sh@829 -- # '[' -z 72938 ']' 00:08:09.880 04:04:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.880 04:04:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.880 04:04:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.880 04:04:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.880 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:10.139 [2024-11-26 04:04:11.662690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
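Between the two halves of the test the first target is torn down and a fresh one is started; the only functional difference in the second half is the transport's in-capsule data size (the 4096 passed to nvmf_filesystem_part), which lets small host writes travel inside the command capsule rather than being fetched in a separate data transfer. Condensed from the trace, with rpc.py again standing in for the rpc_cmd wrapper:

  # teardown of the first variant
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1       # drop the test partition under an exclusive lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                   # killprocess 72566

  # the second variant changes only the transport options
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule: 4 KiB in-capsule data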
00:08:10.139 [2024-11-26 04:04:11.662821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.139 [2024-11-26 04:04:11.795151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.139 [2024-11-26 04:04:11.854087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.139 [2024-11-26 04:04:11.854608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.139 [2024-11-26 04:04:11.854783] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.139 [2024-11-26 04:04:11.854965] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.139 [2024-11-26 04:04:11.855234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.139 [2024-11-26 04:04:11.855464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.139 [2024-11-26 04:04:11.855366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.139 [2024-11-26 04:04:11.856240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.076 04:04:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.076 04:04:12 -- common/autotest_common.sh@862 -- # return 0 00:08:11.076 04:04:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.076 04:04:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.076 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.076 04:04:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.076 04:04:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:11.076 04:04:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:11.076 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.076 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.076 [2024-11-26 04:04:12.756377] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.076 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.076 04:04:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:11.076 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.076 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.335 Malloc1 00:08:11.335 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.335 04:04:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:11.335 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.335 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 04:04:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:11.336 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.336 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 04:04:12 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.336 04:04:12 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.336 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 [2024-11-26 04:04:12.947758] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.336 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 04:04:12 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:11.336 04:04:12 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:11.336 04:04:12 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:11.336 04:04:12 -- common/autotest_common.sh@1369 -- # local bs 00:08:11.336 04:04:12 -- common/autotest_common.sh@1370 -- # local nb 00:08:11.336 04:04:12 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:11.336 04:04:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.336 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 04:04:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 04:04:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:11.336 { 00:08:11.336 "aliases": [ 00:08:11.336 "8cf4e226-55a6-48f6-9092-00445b6cd126" 00:08:11.336 ], 00:08:11.336 "assigned_rate_limits": { 00:08:11.336 "r_mbytes_per_sec": 0, 00:08:11.336 "rw_ios_per_sec": 0, 00:08:11.336 "rw_mbytes_per_sec": 0, 00:08:11.336 "w_mbytes_per_sec": 0 00:08:11.336 }, 00:08:11.336 "block_size": 512, 00:08:11.336 "claim_type": "exclusive_write", 00:08:11.336 "claimed": true, 00:08:11.336 "driver_specific": {}, 00:08:11.336 "memory_domains": [ 00:08:11.336 { 00:08:11.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.336 "dma_device_type": 2 00:08:11.336 } 00:08:11.336 ], 00:08:11.336 "name": "Malloc1", 00:08:11.336 "num_blocks": 1048576, 00:08:11.336 "product_name": "Malloc disk", 00:08:11.336 "supported_io_types": { 00:08:11.336 "abort": true, 00:08:11.336 "compare": false, 00:08:11.336 "compare_and_write": false, 00:08:11.336 "flush": true, 00:08:11.336 "nvme_admin": false, 00:08:11.336 "nvme_io": false, 00:08:11.336 "read": true, 00:08:11.336 "reset": true, 00:08:11.336 "unmap": true, 00:08:11.336 "write": true, 00:08:11.336 "write_zeroes": true 00:08:11.336 }, 00:08:11.336 "uuid": "8cf4e226-55a6-48f6-9092-00445b6cd126", 00:08:11.336 "zoned": false 00:08:11.336 } 00:08:11.336 ]' 00:08:11.336 04:04:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:11.336 04:04:13 -- common/autotest_common.sh@1372 -- # bs=512 00:08:11.336 04:04:13 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:11.336 04:04:13 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:11.336 04:04:13 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:11.336 04:04:13 -- common/autotest_common.sh@1377 -- # echo 512 00:08:11.336 04:04:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:11.336 04:04:13 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.595 04:04:13 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.595 04:04:13 -- common/autotest_common.sh@1187 -- # local i=0 00:08:11.595 04:04:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.595 04:04:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:11.595 04:04:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:13.500 04:04:15 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:13.500 04:04:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:13.500 04:04:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.500 04:04:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:13.500 04:04:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.500 04:04:15 -- common/autotest_common.sh@1197 -- # return 0 00:08:13.500 04:04:15 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:13.500 04:04:15 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:13.759 04:04:15 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:13.759 04:04:15 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:13.759 04:04:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.759 04:04:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.759 04:04:15 -- setup/common.sh@80 -- # echo 536870912 00:08:13.759 04:04:15 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:13.759 04:04:15 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:13.759 04:04:15 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:13.759 04:04:15 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.759 04:04:15 -- target/filesystem.sh@69 -- # partprobe 00:08:13.759 04:04:15 -- target/filesystem.sh@70 -- # sleep 1 00:08:14.694 04:04:16 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:14.694 04:04:16 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:14.694 04:04:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:14.694 04:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.694 04:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:14.694 ************************************ 00:08:14.694 START TEST filesystem_in_capsule_ext4 00:08:14.694 ************************************ 00:08:14.694 04:04:16 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:14.694 04:04:16 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:14.694 04:04:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.694 04:04:16 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:14.694 04:04:16 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:14.694 04:04:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:14.694 04:04:16 -- common/autotest_common.sh@914 -- # local i=0 00:08:14.694 04:04:16 -- common/autotest_common.sh@915 -- # local force 00:08:14.694 04:04:16 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:14.694 04:04:16 -- common/autotest_common.sh@918 -- # force=-F 00:08:14.694 04:04:16 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:14.694 mke2fs 1.47.0 (5-Feb-2023) 00:08:14.953 Discarding device blocks: 0/522240 done 00:08:14.953 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:14.953 Filesystem UUID: fb0a217d-0c3a-45a0-80a2-2e5173362c41 00:08:14.953 Superblock backups stored on blocks: 00:08:14.953 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:14.953 00:08:14.953 Allocating group tables: 0/64 done 00:08:14.953 Writing inode tables: 0/64 done 00:08:14.953 Creating journal (8192 blocks): done 00:08:14.953 Writing superblocks and filesystem accounting information: 0/64 done 00:08:14.953 00:08:14.953 04:04:16 
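The 536870912 echoed above comes from sec_size_to_bytes in test/setup/common.sh; the trace only shows the existence check on /sys/block/nvme0n1 and the final echo, but a plausible reading is that the helper converts the kernel's 512-byte-sector count into bytes, which filesystem.sh then compares against the malloc bdev size:

  # hypothetical sketch of sec_size_to_bytes; the real helper lives in test/setup/common.sh
  dev=nvme0n1
  [[ -e /sys/block/$dev ]] && echo $(( $(cat /sys/block/$dev/size) * 512 ))   # 1048576 * 512 = 536870912
  # filesystem.sh then requires (( nvme_size == malloc_size )) before partitioning the device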
-- common/autotest_common.sh@931 -- # return 0 00:08:14.953 04:04:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.224 04:04:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.224 04:04:21 -- target/filesystem.sh@25 -- # sync 00:08:20.224 04:04:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.224 04:04:21 -- target/filesystem.sh@27 -- # sync 00:08:20.224 04:04:21 -- target/filesystem.sh@29 -- # i=0 00:08:20.224 04:04:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.224 04:04:21 -- target/filesystem.sh@37 -- # kill -0 72938 00:08:20.224 04:04:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.224 04:04:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.483 04:04:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.483 04:04:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.483 ************************************ 00:08:20.483 END TEST filesystem_in_capsule_ext4 00:08:20.483 ************************************ 00:08:20.483 00:08:20.483 real 0m5.616s 00:08:20.483 user 0m0.022s 00:08:20.483 sys 0m0.073s 00:08:20.483 04:04:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.483 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.483 04:04:22 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.483 04:04:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:20.483 04:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.483 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.483 ************************************ 00:08:20.483 START TEST filesystem_in_capsule_btrfs 00:08:20.483 ************************************ 00:08:20.483 04:04:22 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.483 04:04:22 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.483 04:04:22 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.483 04:04:22 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.483 04:04:22 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:20.483 04:04:22 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:20.483 04:04:22 -- common/autotest_common.sh@914 -- # local i=0 00:08:20.483 04:04:22 -- common/autotest_common.sh@915 -- # local force 00:08:20.483 04:04:22 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:20.483 04:04:22 -- common/autotest_common.sh@920 -- # force=-f 00:08:20.483 04:04:22 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:20.742 btrfs-progs v6.8.1 00:08:20.742 See https://btrfs.readthedocs.io for more information. 00:08:20.742 00:08:20.742 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:20.742 NOTE: several default settings have changed in version 5.15, please make sure 00:08:20.742 this does not affect your deployments: 00:08:20.742 - DUP for metadata (-m dup) 00:08:20.742 - enabled no-holes (-O no-holes) 00:08:20.742 - enabled free-space-tree (-R free-space-tree) 00:08:20.742 00:08:20.742 Label: (null) 00:08:20.742 UUID: e2408d8d-7b7e-4ae7-b512-5b97bd7900c6 00:08:20.742 Node size: 16384 00:08:20.742 Sector size: 4096 (CPU page size: 4096) 00:08:20.742 Filesystem size: 510.00MiB 00:08:20.742 Block group profiles: 00:08:20.742 Data: single 8.00MiB 00:08:20.742 Metadata: DUP 32.00MiB 00:08:20.742 System: DUP 8.00MiB 00:08:20.742 SSD detected: yes 00:08:20.742 Zoned device: no 00:08:20.742 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:20.742 Checksum: crc32c 00:08:20.742 Number of devices: 1 00:08:20.742 Devices: 00:08:20.742 ID SIZE PATH 00:08:20.742 1 510.00MiB /dev/nvme0n1p1 00:08:20.742 00:08:20.742 04:04:22 -- common/autotest_common.sh@931 -- # return 0 00:08:20.742 04:04:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.742 04:04:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.742 04:04:22 -- target/filesystem.sh@25 -- # sync 00:08:20.742 04:04:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.742 04:04:22 -- target/filesystem.sh@27 -- # sync 00:08:20.742 04:04:22 -- target/filesystem.sh@29 -- # i=0 00:08:20.742 04:04:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.742 04:04:22 -- target/filesystem.sh@37 -- # kill -0 72938 00:08:20.742 04:04:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.742 04:04:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.742 04:04:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.742 04:04:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.742 ************************************ 00:08:20.742 END TEST filesystem_in_capsule_btrfs 00:08:20.742 ************************************ 00:08:20.742 00:08:20.742 real 0m0.295s 00:08:20.742 user 0m0.019s 00:08:20.742 sys 0m0.065s 00:08:20.742 04:04:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.742 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.742 04:04:22 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:20.742 04:04:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:20.742 04:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.742 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.742 ************************************ 00:08:20.742 START TEST filesystem_in_capsule_xfs 00:08:20.742 ************************************ 00:08:20.742 04:04:22 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:20.742 04:04:22 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:20.742 04:04:22 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.742 04:04:22 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:20.742 04:04:22 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:20.742 04:04:22 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:20.742 04:04:22 -- common/autotest_common.sh@914 -- # local i=0 00:08:20.742 04:04:22 -- common/autotest_common.sh@915 -- # local force 00:08:20.742 04:04:22 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:20.742 04:04:22 -- common/autotest_common.sh@920 -- # force=-f 00:08:20.742 04:04:22 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:21.001 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:21.001 = sectsz=512 attr=2, projid32bit=1 00:08:21.001 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:21.001 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:21.001 data = bsize=4096 blocks=130560, imaxpct=25 00:08:21.001 = sunit=0 swidth=0 blks 00:08:21.001 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:21.001 log =internal log bsize=4096 blocks=16384, version=2 00:08:21.001 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:21.001 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:21.569 Discarding blocks...Done. 00:08:21.569 04:04:23 -- common/autotest_common.sh@931 -- # return 0 00:08:21.569 04:04:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.472 04:04:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.472 04:04:24 -- target/filesystem.sh@25 -- # sync 00:08:23.472 04:04:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.472 04:04:25 -- target/filesystem.sh@27 -- # sync 00:08:23.472 04:04:25 -- target/filesystem.sh@29 -- # i=0 00:08:23.472 04:04:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.472 04:04:25 -- target/filesystem.sh@37 -- # kill -0 72938 00:08:23.472 04:04:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.472 04:04:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.472 04:04:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.472 04:04:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.472 ************************************ 00:08:23.472 END TEST filesystem_in_capsule_xfs 00:08:23.472 ************************************ 00:08:23.472 00:08:23.472 real 0m2.632s 00:08:23.472 user 0m0.023s 00:08:23.472 sys 0m0.060s 00:08:23.472 04:04:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.472 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:23.472 04:04:25 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:23.472 04:04:25 -- target/filesystem.sh@93 -- # sync 00:08:23.472 04:04:25 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.472 04:04:25 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.472 04:04:25 -- common/autotest_common.sh@1208 -- # local i=0 00:08:23.472 04:04:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:23.472 04:04:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.472 04:04:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.472 04:04:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:23.472 04:04:25 -- common/autotest_common.sh@1220 -- # return 0 00:08:23.472 04:04:25 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.472 04:04:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.472 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:23.472 04:04:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.472 04:04:25 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:23.472 04:04:25 -- target/filesystem.sh@101 -- # killprocess 72938 00:08:23.472 04:04:25 -- common/autotest_common.sh@936 -- # '[' -z 72938 ']' 00:08:23.472 04:04:25 -- common/autotest_common.sh@940 -- # kill -0 72938 00:08:23.472 04:04:25 -- 
common/autotest_common.sh@941 -- # uname 00:08:23.472 04:04:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.472 04:04:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72938 00:08:23.731 killing process with pid 72938 00:08:23.731 04:04:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:23.731 04:04:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:23.731 04:04:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72938' 00:08:23.731 04:04:25 -- common/autotest_common.sh@955 -- # kill 72938 00:08:23.731 04:04:25 -- common/autotest_common.sh@960 -- # wait 72938 00:08:24.300 04:04:25 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:24.300 00:08:24.300 real 0m14.193s 00:08:24.300 user 0m54.843s 00:08:24.300 sys 0m1.628s 00:08:24.300 04:04:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.300 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:24.300 ************************************ 00:08:24.300 END TEST nvmf_filesystem_in_capsule 00:08:24.300 ************************************ 00:08:24.300 04:04:25 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:24.300 04:04:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:24.300 04:04:25 -- nvmf/common.sh@116 -- # sync 00:08:24.300 04:04:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:24.300 04:04:25 -- nvmf/common.sh@119 -- # set +e 00:08:24.300 04:04:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:24.300 04:04:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:24.300 rmmod nvme_tcp 00:08:24.300 rmmod nvme_fabrics 00:08:24.300 rmmod nvme_keyring 00:08:24.300 04:04:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:24.300 04:04:25 -- nvmf/common.sh@123 -- # set -e 00:08:24.300 04:04:25 -- nvmf/common.sh@124 -- # return 0 00:08:24.300 04:04:25 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:24.300 04:04:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:24.300 04:04:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:24.300 04:04:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:24.300 04:04:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.300 04:04:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:24.300 04:04:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.300 04:04:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.300 04:04:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.300 04:04:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:24.300 00:08:24.300 real 0m29.885s 00:08:24.300 user 1m52.067s 00:08:24.300 sys 0m3.715s 00:08:24.300 04:04:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.300 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:24.300 ************************************ 00:08:24.300 END TEST nvmf_filesystem 00:08:24.300 ************************************ 00:08:24.300 04:04:26 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:24.300 04:04:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:24.300 04:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.300 04:04:26 -- common/autotest_common.sh@10 -- # set +x 00:08:24.300 ************************************ 00:08:24.300 START TEST nvmf_discovery 00:08:24.300 ************************************ 00:08:24.300 04:04:26 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:24.560 * Looking for test storage... 00:08:24.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:24.560 04:04:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:24.560 04:04:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:24.560 04:04:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:24.560 04:04:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:24.560 04:04:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:24.560 04:04:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:24.560 04:04:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:24.560 04:04:26 -- scripts/common.sh@335 -- # IFS=.-: 00:08:24.560 04:04:26 -- scripts/common.sh@335 -- # read -ra ver1 00:08:24.560 04:04:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.560 04:04:26 -- scripts/common.sh@336 -- # read -ra ver2 00:08:24.560 04:04:26 -- scripts/common.sh@337 -- # local 'op=<' 00:08:24.560 04:04:26 -- scripts/common.sh@339 -- # ver1_l=2 00:08:24.560 04:04:26 -- scripts/common.sh@340 -- # ver2_l=1 00:08:24.560 04:04:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:24.560 04:04:26 -- scripts/common.sh@343 -- # case "$op" in 00:08:24.560 04:04:26 -- scripts/common.sh@344 -- # : 1 00:08:24.560 04:04:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:24.560 04:04:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.560 04:04:26 -- scripts/common.sh@364 -- # decimal 1 00:08:24.560 04:04:26 -- scripts/common.sh@352 -- # local d=1 00:08:24.560 04:04:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.560 04:04:26 -- scripts/common.sh@354 -- # echo 1 00:08:24.560 04:04:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:24.560 04:04:26 -- scripts/common.sh@365 -- # decimal 2 00:08:24.560 04:04:26 -- scripts/common.sh@352 -- # local d=2 00:08:24.560 04:04:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.560 04:04:26 -- scripts/common.sh@354 -- # echo 2 00:08:24.560 04:04:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:24.560 04:04:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:24.560 04:04:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:24.560 04:04:26 -- scripts/common.sh@367 -- # return 0 00:08:24.560 04:04:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.560 04:04:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:24.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.560 --rc genhtml_branch_coverage=1 00:08:24.560 --rc genhtml_function_coverage=1 00:08:24.560 --rc genhtml_legend=1 00:08:24.560 --rc geninfo_all_blocks=1 00:08:24.560 --rc geninfo_unexecuted_blocks=1 00:08:24.560 00:08:24.560 ' 00:08:24.560 04:04:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:24.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.560 --rc genhtml_branch_coverage=1 00:08:24.560 --rc genhtml_function_coverage=1 00:08:24.560 --rc genhtml_legend=1 00:08:24.560 --rc geninfo_all_blocks=1 00:08:24.560 --rc geninfo_unexecuted_blocks=1 00:08:24.560 00:08:24.560 ' 00:08:24.560 04:04:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:24.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.560 --rc genhtml_branch_coverage=1 00:08:24.560 --rc genhtml_function_coverage=1 00:08:24.560 --rc genhtml_legend=1 00:08:24.560 
--rc geninfo_all_blocks=1 00:08:24.560 --rc geninfo_unexecuted_blocks=1 00:08:24.560 00:08:24.560 ' 00:08:24.560 04:04:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:24.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.560 --rc genhtml_branch_coverage=1 00:08:24.560 --rc genhtml_function_coverage=1 00:08:24.560 --rc genhtml_legend=1 00:08:24.560 --rc geninfo_all_blocks=1 00:08:24.560 --rc geninfo_unexecuted_blocks=1 00:08:24.560 00:08:24.560 ' 00:08:24.560 04:04:26 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:24.560 04:04:26 -- nvmf/common.sh@7 -- # uname -s 00:08:24.560 04:04:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.560 04:04:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.560 04:04:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.560 04:04:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.560 04:04:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.560 04:04:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.560 04:04:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.560 04:04:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.560 04:04:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.560 04:04:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.560 04:04:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:08:24.560 04:04:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:08:24.560 04:04:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.560 04:04:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.560 04:04:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:24.560 04:04:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.560 04:04:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.560 04:04:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.560 04:04:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.560 04:04:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.560 04:04:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.560 04:04:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.560 04:04:26 -- paths/export.sh@5 -- # export PATH 00:08:24.560 04:04:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.560 04:04:26 -- nvmf/common.sh@46 -- # : 0 00:08:24.560 04:04:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:24.560 04:04:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:24.560 04:04:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:24.560 04:04:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.560 04:04:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.561 04:04:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:24.561 04:04:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:24.561 04:04:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:24.561 04:04:26 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:24.561 04:04:26 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:24.561 04:04:26 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:24.561 04:04:26 -- target/discovery.sh@15 -- # hash nvme 00:08:24.561 04:04:26 -- target/discovery.sh@20 -- # nvmftestinit 00:08:24.561 04:04:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:24.561 04:04:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.561 04:04:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:24.561 04:04:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:24.561 04:04:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:24.561 04:04:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.561 04:04:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.561 04:04:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.561 04:04:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:24.561 04:04:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:24.561 04:04:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:24.561 04:04:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:24.561 04:04:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:24.561 04:04:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:24.561 04:04:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.561 04:04:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.561 04:04:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:24.561 04:04:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:24.561 04:04:26 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:24.561 04:04:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:24.561 04:04:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:24.561 04:04:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.561 04:04:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:24.561 04:04:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:24.561 04:04:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:24.561 04:04:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:24.561 04:04:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:24.561 04:04:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:24.561 Cannot find device "nvmf_tgt_br" 00:08:24.561 04:04:26 -- nvmf/common.sh@154 -- # true 00:08:24.561 04:04:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:24.561 Cannot find device "nvmf_tgt_br2" 00:08:24.561 04:04:26 -- nvmf/common.sh@155 -- # true 00:08:24.561 04:04:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:24.561 04:04:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:24.561 Cannot find device "nvmf_tgt_br" 00:08:24.561 04:04:26 -- nvmf/common.sh@157 -- # true 00:08:24.561 04:04:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:24.821 Cannot find device "nvmf_tgt_br2" 00:08:24.821 04:04:26 -- nvmf/common.sh@158 -- # true 00:08:24.821 04:04:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:24.821 04:04:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:24.821 04:04:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:24.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.821 04:04:26 -- nvmf/common.sh@161 -- # true 00:08:24.821 04:04:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:24.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.821 04:04:26 -- nvmf/common.sh@162 -- # true 00:08:24.821 04:04:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:24.821 04:04:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:24.821 04:04:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:24.821 04:04:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:24.821 04:04:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:24.821 04:04:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:24.821 04:04:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:24.821 04:04:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:24.821 04:04:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:24.821 04:04:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:24.821 04:04:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:24.821 04:04:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:24.821 04:04:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:24.821 04:04:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:24.821 04:04:26 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:24.821 04:04:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:24.821 04:04:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:24.821 04:04:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:24.821 04:04:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:24.821 04:04:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:24.821 04:04:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:24.821 04:04:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:24.821 04:04:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:24.821 04:04:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:24.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:24.821 00:08:24.821 --- 10.0.0.2 ping statistics --- 00:08:24.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.821 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:24.821 04:04:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:24.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:24.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:08:24.821 00:08:24.821 --- 10.0.0.3 ping statistics --- 00:08:24.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.821 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:24.821 04:04:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:24.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:24.821 00:08:24.821 --- 10.0.0.1 ping statistics --- 00:08:24.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.821 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:24.821 04:04:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.821 04:04:26 -- nvmf/common.sh@421 -- # return 0 00:08:24.821 04:04:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:24.821 04:04:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.821 04:04:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:24.821 04:04:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:24.821 04:04:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.821 04:04:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:24.821 04:04:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:25.081 04:04:26 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:25.081 04:04:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.081 04:04:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.081 04:04:26 -- common/autotest_common.sh@10 -- # set +x 00:08:25.081 04:04:26 -- nvmf/common.sh@469 -- # nvmfpid=73484 00:08:25.081 04:04:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.081 04:04:26 -- nvmf/common.sh@470 -- # waitforlisten 73484 00:08:25.081 04:04:26 -- common/autotest_common.sh@829 -- # '[' -z 73484 ']' 00:08:25.081 04:04:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
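Once the target answers on /var/tmp/spdk.sock, discovery.sh populates it with four null-backed subsystems, a discovery listener and a referral, all through the rpc_cmd calls traced below. A minimal out-of-harness sketch of that sequence, assuming scripts/rpc.py against the default socket and the addresses/sizes used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # talks to /var/tmp/spdk.sock by default

  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags as used by the test
  for i in 1 2 3 4; do
      $rpc bdev_null_create "Null$i" 102400 512      # 100 MiB null bdev, 512-byte blocks
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # nvme discover -t tcp -a 10.0.0.2 -s 4420 (with the --hostnqn/--hostid above) should then
  # report six discovery log entries: the current discovery subsystem, the four NVMe
  # subsystems, and the referral on port 4430.
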
00:08:25.081 04:04:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.081 04:04:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.081 04:04:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.081 04:04:26 -- common/autotest_common.sh@10 -- # set +x 00:08:25.081 [2024-11-26 04:04:26.661550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.081 [2024-11-26 04:04:26.661627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.081 [2024-11-26 04:04:26.798978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.341 [2024-11-26 04:04:26.873279] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.341 [2024-11-26 04:04:26.873608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.341 [2024-11-26 04:04:26.873700] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.341 [2024-11-26 04:04:26.873809] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.341 [2024-11-26 04:04:26.874030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.341 [2024-11-26 04:04:26.874143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.341 [2024-11-26 04:04:26.874531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.341 [2024-11-26 04:04:26.874535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.910 04:04:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.910 04:04:27 -- common/autotest_common.sh@862 -- # return 0 00:08:25.910 04:04:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:25.910 04:04:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.910 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 04:04:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.910 04:04:27 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.910 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 [2024-11-26 04:04:27.640338] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.910 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 04:04:27 -- target/discovery.sh@26 -- # seq 1 4 00:08:26.169 04:04:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.169 04:04:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:26.169 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.169 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 Null1 00:08:26.169 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.169 04:04:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.169 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.169 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 04:04:27 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:26.169 04:04:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:26.169 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.169 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.169 04:04:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.169 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.169 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 [2024-11-26 04:04:27.707528] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.169 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.169 04:04:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.169 04:04:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:26.169 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.169 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 Null2 00:08:26.169 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.169 04:04:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.170 04:04:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 Null3 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.170 04:04:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 Null4 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:26.170 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.170 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.170 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.170 04:04:27 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 4420 00:08:26.430 00:08:26.430 Discovery Log Number of Records 6, Generation counter 6 00:08:26.430 =====Discovery Log Entry 0====== 00:08:26.430 trtype: tcp 00:08:26.430 adrfam: ipv4 00:08:26.430 subtype: current discovery subsystem 00:08:26.430 treq: not required 00:08:26.430 portid: 0 00:08:26.430 trsvcid: 4420 00:08:26.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:26.430 traddr: 10.0.0.2 00:08:26.430 eflags: explicit discovery connections, duplicate discovery information 00:08:26.430 sectype: none 00:08:26.430 =====Discovery Log Entry 1====== 00:08:26.430 trtype: tcp 00:08:26.430 adrfam: ipv4 00:08:26.430 subtype: nvme subsystem 00:08:26.430 treq: not required 00:08:26.430 portid: 0 00:08:26.430 trsvcid: 4420 00:08:26.430 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:26.430 traddr: 10.0.0.2 00:08:26.430 eflags: none 00:08:26.430 sectype: none 00:08:26.430 =====Discovery Log Entry 2====== 00:08:26.430 trtype: tcp 00:08:26.430 adrfam: ipv4 00:08:26.430 subtype: nvme subsystem 00:08:26.430 treq: not required 00:08:26.430 portid: 0 00:08:26.430 trsvcid: 
4420 00:08:26.430 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:26.430 traddr: 10.0.0.2 00:08:26.430 eflags: none 00:08:26.430 sectype: none 00:08:26.430 =====Discovery Log Entry 3====== 00:08:26.430 trtype: tcp 00:08:26.430 adrfam: ipv4 00:08:26.430 subtype: nvme subsystem 00:08:26.430 treq: not required 00:08:26.430 portid: 0 00:08:26.430 trsvcid: 4420 00:08:26.430 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:26.430 traddr: 10.0.0.2 00:08:26.430 eflags: none 00:08:26.430 sectype: none 00:08:26.430 =====Discovery Log Entry 4====== 00:08:26.430 trtype: tcp 00:08:26.430 adrfam: ipv4 00:08:26.430 subtype: nvme subsystem 00:08:26.430 treq: not required 00:08:26.430 portid: 0 00:08:26.430 trsvcid: 4420 00:08:26.430 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:26.430 traddr: 10.0.0.2 00:08:26.430 eflags: none 00:08:26.430 sectype: none 00:08:26.430 =====Discovery Log Entry 5====== 00:08:26.430 trtype: tcp 00:08:26.430 adrfam: ipv4 00:08:26.430 subtype: discovery subsystem referral 00:08:26.430 treq: not required 00:08:26.430 portid: 0 00:08:26.430 trsvcid: 4430 00:08:26.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:26.430 traddr: 10.0.0.2 00:08:26.430 eflags: none 00:08:26.430 sectype: none 00:08:26.430 Perform nvmf subsystem discovery via RPC 00:08:26.430 04:04:27 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:26.430 04:04:27 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:26.430 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.430 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.430 [2024-11-26 04:04:27.951697] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:26.430 [ 00:08:26.430 { 00:08:26.430 "allow_any_host": true, 00:08:26.430 "hosts": [], 00:08:26.430 "listen_addresses": [ 00:08:26.430 { 00:08:26.430 "adrfam": "IPv4", 00:08:26.430 "traddr": "10.0.0.2", 00:08:26.430 "transport": "TCP", 00:08:26.430 "trsvcid": "4420", 00:08:26.430 "trtype": "TCP" 00:08:26.430 } 00:08:26.430 ], 00:08:26.430 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:26.430 "subtype": "Discovery" 00:08:26.430 }, 00:08:26.430 { 00:08:26.430 "allow_any_host": true, 00:08:26.430 "hosts": [], 00:08:26.430 "listen_addresses": [ 00:08:26.430 { 00:08:26.430 "adrfam": "IPv4", 00:08:26.430 "traddr": "10.0.0.2", 00:08:26.430 "transport": "TCP", 00:08:26.430 "trsvcid": "4420", 00:08:26.430 "trtype": "TCP" 00:08:26.430 } 00:08:26.430 ], 00:08:26.430 "max_cntlid": 65519, 00:08:26.430 "max_namespaces": 32, 00:08:26.430 "min_cntlid": 1, 00:08:26.430 "model_number": "SPDK bdev Controller", 00:08:26.430 "namespaces": [ 00:08:26.430 { 00:08:26.430 "bdev_name": "Null1", 00:08:26.430 "name": "Null1", 00:08:26.430 "nguid": "A6E81E167E5C4FCDB5B0DCE73CA24E10", 00:08:26.430 "nsid": 1, 00:08:26.430 "uuid": "a6e81e16-7e5c-4fcd-b5b0-dce73ca24e10" 00:08:26.430 } 00:08:26.430 ], 00:08:26.430 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.430 "serial_number": "SPDK00000000000001", 00:08:26.430 "subtype": "NVMe" 00:08:26.431 }, 00:08:26.431 { 00:08:26.431 "allow_any_host": true, 00:08:26.431 "hosts": [], 00:08:26.431 "listen_addresses": [ 00:08:26.431 { 00:08:26.431 "adrfam": "IPv4", 00:08:26.431 "traddr": "10.0.0.2", 00:08:26.431 "transport": "TCP", 00:08:26.431 "trsvcid": "4420", 00:08:26.431 "trtype": "TCP" 00:08:26.431 } 00:08:26.431 ], 00:08:26.431 "max_cntlid": 65519, 00:08:26.431 "max_namespaces": 32, 00:08:26.431 "min_cntlid": 
1, 00:08:26.431 "model_number": "SPDK bdev Controller", 00:08:26.431 "namespaces": [ 00:08:26.431 { 00:08:26.431 "bdev_name": "Null2", 00:08:26.431 "name": "Null2", 00:08:26.431 "nguid": "8D241219EBFE4279AB593A56CEA53EAF", 00:08:26.431 "nsid": 1, 00:08:26.431 "uuid": "8d241219-ebfe-4279-ab59-3a56cea53eaf" 00:08:26.431 } 00:08:26.431 ], 00:08:26.431 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:26.431 "serial_number": "SPDK00000000000002", 00:08:26.431 "subtype": "NVMe" 00:08:26.431 }, 00:08:26.431 { 00:08:26.431 "allow_any_host": true, 00:08:26.431 "hosts": [], 00:08:26.431 "listen_addresses": [ 00:08:26.431 { 00:08:26.431 "adrfam": "IPv4", 00:08:26.431 "traddr": "10.0.0.2", 00:08:26.431 "transport": "TCP", 00:08:26.431 "trsvcid": "4420", 00:08:26.431 "trtype": "TCP" 00:08:26.431 } 00:08:26.431 ], 00:08:26.431 "max_cntlid": 65519, 00:08:26.431 "max_namespaces": 32, 00:08:26.431 "min_cntlid": 1, 00:08:26.431 "model_number": "SPDK bdev Controller", 00:08:26.431 "namespaces": [ 00:08:26.431 { 00:08:26.431 "bdev_name": "Null3", 00:08:26.431 "name": "Null3", 00:08:26.431 "nguid": "4B83B5A87A1942298B96647EFD3FA5AE", 00:08:26.431 "nsid": 1, 00:08:26.431 "uuid": "4b83b5a8-7a19-4229-8b96-647efd3fa5ae" 00:08:26.431 } 00:08:26.431 ], 00:08:26.431 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:26.431 "serial_number": "SPDK00000000000003", 00:08:26.431 "subtype": "NVMe" 00:08:26.431 }, 00:08:26.431 { 00:08:26.431 "allow_any_host": true, 00:08:26.431 "hosts": [], 00:08:26.431 "listen_addresses": [ 00:08:26.431 { 00:08:26.431 "adrfam": "IPv4", 00:08:26.431 "traddr": "10.0.0.2", 00:08:26.431 "transport": "TCP", 00:08:26.431 "trsvcid": "4420", 00:08:26.431 "trtype": "TCP" 00:08:26.431 } 00:08:26.431 ], 00:08:26.431 "max_cntlid": 65519, 00:08:26.431 "max_namespaces": 32, 00:08:26.431 "min_cntlid": 1, 00:08:26.431 "model_number": "SPDK bdev Controller", 00:08:26.431 "namespaces": [ 00:08:26.431 { 00:08:26.431 "bdev_name": "Null4", 00:08:26.431 "name": "Null4", 00:08:26.431 "nguid": "2AE5A1CE2507435DB024BBC87BB10FDB", 00:08:26.431 "nsid": 1, 00:08:26.431 "uuid": "2ae5a1ce-2507-435d-b024-bbc87bb10fdb" 00:08:26.431 } 00:08:26.431 ], 00:08:26.431 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:26.431 "serial_number": "SPDK00000000000004", 00:08:26.431 "subtype": "NVMe" 00:08:26.431 } 00:08:26.431 ] 00:08:26.431 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:27 -- target/discovery.sh@42 -- # seq 1 4 00:08:26.431 04:04:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.431 04:04:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.431 04:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.431 04:04:28 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 
-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.431 04:04:28 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.431 04:04:28 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:26.431 04:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.431 04:04:28 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:26.431 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 04:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.431 04:04:28 -- target/discovery.sh@49 -- # check_bdevs= 00:08:26.431 04:04:28 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:26.431 04:04:28 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:26.431 04:04:28 -- target/discovery.sh@57 -- # nvmftestfini 00:08:26.431 04:04:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:26.431 04:04:28 -- nvmf/common.sh@116 -- # sync 00:08:26.431 04:04:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:26.431 04:04:28 -- nvmf/common.sh@119 -- # set +e 00:08:26.431 04:04:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:26.431 04:04:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:26.431 rmmod nvme_tcp 00:08:26.431 rmmod nvme_fabrics 00:08:26.431 rmmod nvme_keyring 00:08:26.431 04:04:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:26.691 04:04:28 -- nvmf/common.sh@123 -- # set -e 00:08:26.691 04:04:28 -- nvmf/common.sh@124 -- # return 0 00:08:26.691 04:04:28 -- nvmf/common.sh@477 -- # '[' -n 73484 ']' 00:08:26.691 04:04:28 -- nvmf/common.sh@478 -- # killprocess 73484 00:08:26.691 04:04:28 -- common/autotest_common.sh@936 -- # '[' -z 73484 ']' 00:08:26.691 04:04:28 -- 
common/autotest_common.sh@940 -- # kill -0 73484 00:08:26.691 04:04:28 -- common/autotest_common.sh@941 -- # uname 00:08:26.691 04:04:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:26.691 04:04:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73484 00:08:26.691 killing process with pid 73484 00:08:26.691 04:04:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:26.691 04:04:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:26.691 04:04:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73484' 00:08:26.691 04:04:28 -- common/autotest_common.sh@955 -- # kill 73484 00:08:26.691 [2024-11-26 04:04:28.235174] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:26.691 04:04:28 -- common/autotest_common.sh@960 -- # wait 73484 00:08:26.950 04:04:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:26.950 04:04:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:26.950 04:04:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:26.950 04:04:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.950 04:04:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:26.950 04:04:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.951 04:04:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.951 04:04:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.951 04:04:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:26.951 00:08:26.951 real 0m2.485s 00:08:26.951 user 0m6.700s 00:08:26.951 sys 0m0.663s 00:08:26.951 04:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.951 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.951 ************************************ 00:08:26.951 END TEST nvmf_discovery 00:08:26.951 ************************************ 00:08:26.951 04:04:28 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:26.951 04:04:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:26.951 04:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.951 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:26.951 ************************************ 00:08:26.951 START TEST nvmf_referrals 00:08:26.951 ************************************ 00:08:26.951 04:04:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:26.951 * Looking for test storage... 
00:08:26.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.951 04:04:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:26.951 04:04:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:26.951 04:04:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.211 04:04:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.211 04:04:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.211 04:04:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.211 04:04:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.211 04:04:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.211 04:04:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.211 04:04:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.211 04:04:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.211 04:04:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.211 04:04:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.211 04:04:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.211 04:04:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.211 04:04:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.211 04:04:28 -- scripts/common.sh@344 -- # : 1 00:08:27.211 04:04:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.211 04:04:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.211 04:04:28 -- scripts/common.sh@364 -- # decimal 1 00:08:27.211 04:04:28 -- scripts/common.sh@352 -- # local d=1 00:08:27.211 04:04:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.211 04:04:28 -- scripts/common.sh@354 -- # echo 1 00:08:27.211 04:04:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.211 04:04:28 -- scripts/common.sh@365 -- # decimal 2 00:08:27.211 04:04:28 -- scripts/common.sh@352 -- # local d=2 00:08:27.211 04:04:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.211 04:04:28 -- scripts/common.sh@354 -- # echo 2 00:08:27.211 04:04:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.211 04:04:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.211 04:04:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.211 04:04:28 -- scripts/common.sh@367 -- # return 0 00:08:27.211 04:04:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.211 04:04:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.211 --rc genhtml_branch_coverage=1 00:08:27.211 --rc genhtml_function_coverage=1 00:08:27.211 --rc genhtml_legend=1 00:08:27.211 --rc geninfo_all_blocks=1 00:08:27.211 --rc geninfo_unexecuted_blocks=1 00:08:27.211 00:08:27.211 ' 00:08:27.211 04:04:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.211 --rc genhtml_branch_coverage=1 00:08:27.211 --rc genhtml_function_coverage=1 00:08:27.211 --rc genhtml_legend=1 00:08:27.211 --rc geninfo_all_blocks=1 00:08:27.211 --rc geninfo_unexecuted_blocks=1 00:08:27.211 00:08:27.211 ' 00:08:27.211 04:04:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.211 --rc genhtml_branch_coverage=1 00:08:27.211 --rc genhtml_function_coverage=1 00:08:27.211 --rc genhtml_legend=1 00:08:27.211 --rc geninfo_all_blocks=1 00:08:27.211 --rc geninfo_unexecuted_blocks=1 00:08:27.211 00:08:27.211 ' 00:08:27.211 
04:04:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.211 --rc genhtml_branch_coverage=1 00:08:27.211 --rc genhtml_function_coverage=1 00:08:27.211 --rc genhtml_legend=1 00:08:27.211 --rc geninfo_all_blocks=1 00:08:27.211 --rc geninfo_unexecuted_blocks=1 00:08:27.211 00:08:27.211 ' 00:08:27.211 04:04:28 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.211 04:04:28 -- nvmf/common.sh@7 -- # uname -s 00:08:27.211 04:04:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.211 04:04:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.211 04:04:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.211 04:04:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.211 04:04:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.211 04:04:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.211 04:04:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.211 04:04:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.211 04:04:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.211 04:04:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.211 04:04:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:08:27.211 04:04:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:08:27.211 04:04:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.211 04:04:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.211 04:04:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.211 04:04:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.211 04:04:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.211 04:04:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.211 04:04:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.211 04:04:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.211 04:04:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.211 04:04:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.211 04:04:28 -- paths/export.sh@5 -- # export PATH 00:08:27.211 04:04:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.211 04:04:28 -- nvmf/common.sh@46 -- # : 0 00:08:27.211 04:04:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:27.211 04:04:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:27.211 04:04:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:27.211 04:04:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.211 04:04:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.211 04:04:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:27.211 04:04:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:27.211 04:04:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:27.211 04:04:28 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:27.211 04:04:28 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:27.211 04:04:28 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:27.211 04:04:28 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:27.211 04:04:28 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:27.211 04:04:28 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:27.211 04:04:28 -- target/referrals.sh@37 -- # nvmftestinit 00:08:27.211 04:04:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:27.211 04:04:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.211 04:04:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:27.211 04:04:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:27.211 04:04:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:27.211 04:04:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.211 04:04:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.211 04:04:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.211 04:04:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:27.211 04:04:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:27.211 04:04:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:27.211 04:04:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:27.211 04:04:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:27.211 04:04:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:27.212 04:04:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.212 04:04:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:27.212 04:04:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.212 04:04:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:27.212 04:04:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.212 04:04:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.212 04:04:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.212 04:04:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.212 04:04:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.212 04:04:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.212 04:04:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.212 04:04:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.212 04:04:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:27.212 04:04:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:27.212 Cannot find device "nvmf_tgt_br" 00:08:27.212 04:04:28 -- nvmf/common.sh@154 -- # true 00:08:27.212 04:04:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.212 Cannot find device "nvmf_tgt_br2" 00:08:27.212 04:04:28 -- nvmf/common.sh@155 -- # true 00:08:27.212 04:04:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:27.212 04:04:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:27.212 Cannot find device "nvmf_tgt_br" 00:08:27.212 04:04:28 -- nvmf/common.sh@157 -- # true 00:08:27.212 04:04:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:27.212 Cannot find device "nvmf_tgt_br2" 00:08:27.212 04:04:28 -- nvmf/common.sh@158 -- # true 00:08:27.212 04:04:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:27.212 04:04:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:27.212 04:04:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.212 04:04:28 -- nvmf/common.sh@161 -- # true 00:08:27.212 04:04:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.212 04:04:28 -- nvmf/common.sh@162 -- # true 00:08:27.212 04:04:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.212 04:04:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.212 04:04:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.212 04:04:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.212 04:04:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:27.472 04:04:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:27.472 04:04:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:27.472 04:04:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:27.472 04:04:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:27.472 04:04:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:27.472 04:04:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:27.472 04:04:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:27.472 04:04:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:27.472 04:04:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:27.472 04:04:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:27.472 04:04:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:27.472 04:04:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:27.472 04:04:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:27.472 04:04:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:27.472 04:04:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:27.472 04:04:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:27.472 04:04:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:27.472 04:04:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:27.472 04:04:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:27.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:27.472 00:08:27.472 --- 10.0.0.2 ping statistics --- 00:08:27.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.472 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:27.472 04:04:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:27.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:27.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:27.472 00:08:27.472 --- 10.0.0.3 ping statistics --- 00:08:27.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.472 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:27.472 04:04:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:27.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:27.472 00:08:27.472 --- 10.0.0.1 ping statistics --- 00:08:27.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.472 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:27.472 04:04:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.472 04:04:29 -- nvmf/common.sh@421 -- # return 0 00:08:27.472 04:04:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:27.472 04:04:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.472 04:04:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:27.472 04:04:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:27.472 04:04:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.472 04:04:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:27.472 04:04:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:27.472 04:04:29 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:27.472 04:04:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:27.472 04:04:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.472 04:04:29 -- common/autotest_common.sh@10 -- # set +x 00:08:27.472 04:04:29 -- nvmf/common.sh@469 -- # nvmfpid=73717 00:08:27.472 04:04:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.472 04:04:29 -- nvmf/common.sh@470 -- # waitforlisten 73717 00:08:27.472 04:04:29 -- common/autotest_common.sh@829 -- # '[' -z 73717 ']' 00:08:27.472 04:04:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.472 04:04:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.472 04:04:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.472 04:04:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.472 04:04:29 -- common/autotest_common.sh@10 -- # set +x 00:08:27.472 [2024-11-26 04:04:29.190858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.472 [2024-11-26 04:04:29.191086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.730 [2024-11-26 04:04:29.335330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.730 [2024-11-26 04:04:29.419397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.730 [2024-11-26 04:04:29.420780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.730 [2024-11-26 04:04:29.420851] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.730 [2024-11-26 04:04:29.421008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
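The referrals test repeats the same bring-up: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace and the harness blocks until the RPC socket responds before issuing any rpc_cmd. A rough sketch of what nvmfappstart/waitforlisten do at this point (the real helpers live in nvmf/common.sh and autotest_common.sh; this is only an approximation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                         # shm id 0, tracepoint mask 0xFFFF, cores 0-3

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do                          # poll until /var/tmp/spdk.sock is serving RPCs
      [ -S /var/tmp/spdk.sock ] && "$rpc" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
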
00:08:27.730 [2024-11-26 04:04:29.421207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.730 [2024-11-26 04:04:29.421350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.730 [2024-11-26 04:04:29.421457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.730 [2024-11-26 04:04:29.421460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.666 04:04:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.666 04:04:30 -- common/autotest_common.sh@862 -- # return 0 00:08:28.666 04:04:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:28.666 04:04:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.666 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.666 04:04:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.666 04:04:30 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.666 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.666 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.666 [2024-11-26 04:04:30.238564] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.666 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.666 04:04:30 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:28.666 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.666 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.666 [2024-11-26 04:04:30.277261] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:28.666 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.666 04:04:30 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:28.666 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.666 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.666 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.666 04:04:30 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:28.666 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.667 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.667 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.667 04:04:30 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:28.667 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.667 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.667 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.667 04:04:30 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.667 04:04:30 -- target/referrals.sh@48 -- # jq length 00:08:28.667 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.667 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.667 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.667 04:04:30 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:28.667 04:04:30 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:28.667 04:04:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.667 04:04:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.667 04:04:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
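With the target up, referrals.sh adds a discovery listener on port 8009, registers three referrals, and checks that the RPC view and an nvme discover against that listener agree, as the trace below shows. A hedged sketch of that round-trip (the test itself drives it through rpc_cmd and the get_referral_ips helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # Both views should list the same three referral addresses.
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Referrals are removed the same way before later sub-cases re-add them with -n (a subsystem NQN).
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
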
00:08:28.667 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.667 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.667 04:04:30 -- target/referrals.sh@21 -- # sort 00:08:28.667 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.667 04:04:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.667 04:04:30 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.667 04:04:30 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:28.667 04:04:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.667 04:04:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.667 04:04:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.667 04:04:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.667 04:04:30 -- target/referrals.sh@26 -- # sort 00:08:28.926 04:04:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.926 04:04:30 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.926 04:04:30 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:28.926 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.926 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.926 04:04:30 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:28.926 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.926 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.926 04:04:30 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:28.926 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.926 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.926 04:04:30 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.926 04:04:30 -- target/referrals.sh@56 -- # jq length 00:08:28.926 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.926 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.926 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.926 04:04:30 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:28.926 04:04:30 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:28.926 04:04:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.926 04:04:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.926 04:04:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.926 04:04:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.926 04:04:30 -- target/referrals.sh@26 -- # sort 00:08:29.184 04:04:30 -- target/referrals.sh@26 -- # echo 00:08:29.184 04:04:30 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:29.185 04:04:30 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:29.185 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.185 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:29.185 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.185 04:04:30 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:29.185 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.185 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:29.185 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.185 04:04:30 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:29.185 04:04:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.185 04:04:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.185 04:04:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.185 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:08:29.185 04:04:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.185 04:04:30 -- target/referrals.sh@21 -- # sort 00:08:29.185 04:04:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.185 04:04:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:29.185 04:04:30 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:29.185 04:04:30 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:29.185 04:04:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.185 04:04:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.185 04:04:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.185 04:04:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.185 04:04:30 -- target/referrals.sh@26 -- # sort 00:08:29.444 04:04:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:29.444 04:04:30 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:29.444 04:04:30 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:29.444 04:04:30 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.444 04:04:30 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:29.444 04:04:30 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.444 04:04:30 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.444 04:04:31 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:29.444 04:04:31 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:29.444 04:04:31 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.444 04:04:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.444 04:04:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:29.444 04:04:31 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.444 04:04:31 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.444 04:04:31 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:29.444 04:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.444 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:29.702 04:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.702 04:04:31 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:29.702 04:04:31 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.702 04:04:31 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.702 04:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.702 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:29.702 04:04:31 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.702 04:04:31 -- target/referrals.sh@21 -- # sort 00:08:29.702 04:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.702 04:04:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:29.702 04:04:31 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.702 04:04:31 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:29.702 04:04:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.702 04:04:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.703 04:04:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.703 04:04:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.703 04:04:31 -- target/referrals.sh@26 -- # sort 00:08:29.703 04:04:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:29.703 04:04:31 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.703 04:04:31 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:29.703 04:04:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.703 04:04:31 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:29.703 04:04:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.703 04:04:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.962 04:04:31 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:29.962 04:04:31 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.962 04:04:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.962 04:04:31 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:29.962 04:04:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:29.962 04:04:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.962 04:04:31 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.962 04:04:31 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:29.962 04:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.962 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 04:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.962 04:04:31 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.962 04:04:31 -- target/referrals.sh@82 -- # jq length 00:08:29.962 04:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.962 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 04:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.962 04:04:31 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:29.962 04:04:31 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:29.962 04:04:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.962 04:04:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.962 04:04:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.962 04:04:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.962 04:04:31 -- target/referrals.sh@26 -- # sort 00:08:30.221 04:04:31 -- target/referrals.sh@26 -- # echo 00:08:30.221 04:04:31 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:30.221 04:04:31 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:30.221 04:04:31 -- target/referrals.sh@86 -- # nvmftestfini 00:08:30.221 04:04:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:30.221 04:04:31 -- nvmf/common.sh@116 -- # sync 00:08:30.221 04:04:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:30.221 04:04:31 -- nvmf/common.sh@119 -- # set +e 00:08:30.221 04:04:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:30.221 04:04:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:30.221 rmmod nvme_tcp 00:08:30.221 rmmod nvme_fabrics 00:08:30.221 rmmod nvme_keyring 00:08:30.221 04:04:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:30.221 04:04:31 -- nvmf/common.sh@123 -- # set -e 00:08:30.221 04:04:31 -- nvmf/common.sh@124 -- # return 0 00:08:30.221 04:04:31 -- nvmf/common.sh@477 -- # '[' -n 73717 ']' 00:08:30.221 04:04:31 -- nvmf/common.sh@478 -- # killprocess 73717 00:08:30.221 04:04:31 -- common/autotest_common.sh@936 -- # '[' -z 73717 ']' 00:08:30.221 04:04:31 -- common/autotest_common.sh@940 -- # kill -0 73717 00:08:30.221 04:04:31 -- common/autotest_common.sh@941 -- # uname 00:08:30.221 04:04:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.221 04:04:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73717 00:08:30.221 04:04:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:30.221 04:04:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:30.221 04:04:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73717' 00:08:30.221 killing process with pid 73717 00:08:30.221 04:04:31 -- common/autotest_common.sh@955 -- # kill 73717 00:08:30.221 04:04:31 -- common/autotest_common.sh@960 -- # wait 73717 00:08:30.480 04:04:32 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:08:30.480 04:04:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:30.480 04:04:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:30.480 04:04:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.480 04:04:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:30.480 04:04:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.480 04:04:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.480 04:04:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.739 04:04:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:30.739 00:08:30.739 real 0m3.688s 00:08:30.739 user 0m12.141s 00:08:30.739 sys 0m0.964s 00:08:30.739 04:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.739 04:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:30.739 ************************************ 00:08:30.739 END TEST nvmf_referrals 00:08:30.739 ************************************ 00:08:30.739 04:04:32 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.739 04:04:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:30.739 04:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.739 04:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:30.739 ************************************ 00:08:30.739 START TEST nvmf_connect_disconnect 00:08:30.739 ************************************ 00:08:30.739 04:04:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.739 * Looking for test storage... 00:08:30.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.739 04:04:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:30.739 04:04:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:30.739 04:04:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:30.739 04:04:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:30.739 04:04:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:30.739 04:04:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:30.739 04:04:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:30.739 04:04:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:30.739 04:04:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:30.739 04:04:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.739 04:04:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:30.739 04:04:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:30.739 04:04:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:30.739 04:04:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:30.739 04:04:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:30.739 04:04:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:30.739 04:04:32 -- scripts/common.sh@344 -- # : 1 00:08:30.739 04:04:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:30.739 04:04:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.739 04:04:32 -- scripts/common.sh@364 -- # decimal 1 00:08:30.739 04:04:32 -- scripts/common.sh@352 -- # local d=1 00:08:30.739 04:04:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.739 04:04:32 -- scripts/common.sh@354 -- # echo 1 00:08:30.739 04:04:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:30.739 04:04:32 -- scripts/common.sh@365 -- # decimal 2 00:08:30.739 04:04:32 -- scripts/common.sh@352 -- # local d=2 00:08:30.739 04:04:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.739 04:04:32 -- scripts/common.sh@354 -- # echo 2 00:08:30.739 04:04:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:30.739 04:04:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:30.739 04:04:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:30.739 04:04:32 -- scripts/common.sh@367 -- # return 0 00:08:30.739 04:04:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.739 04:04:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.739 --rc genhtml_branch_coverage=1 00:08:30.739 --rc genhtml_function_coverage=1 00:08:30.739 --rc genhtml_legend=1 00:08:30.739 --rc geninfo_all_blocks=1 00:08:30.739 --rc geninfo_unexecuted_blocks=1 00:08:30.739 00:08:30.739 ' 00:08:30.739 04:04:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.739 --rc genhtml_branch_coverage=1 00:08:30.739 --rc genhtml_function_coverage=1 00:08:30.739 --rc genhtml_legend=1 00:08:30.739 --rc geninfo_all_blocks=1 00:08:30.739 --rc geninfo_unexecuted_blocks=1 00:08:30.739 00:08:30.739 ' 00:08:30.739 04:04:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.739 --rc genhtml_branch_coverage=1 00:08:30.739 --rc genhtml_function_coverage=1 00:08:30.739 --rc genhtml_legend=1 00:08:30.739 --rc geninfo_all_blocks=1 00:08:30.739 --rc geninfo_unexecuted_blocks=1 00:08:30.739 00:08:30.739 ' 00:08:30.739 04:04:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.739 --rc genhtml_branch_coverage=1 00:08:30.739 --rc genhtml_function_coverage=1 00:08:30.739 --rc genhtml_legend=1 00:08:30.739 --rc geninfo_all_blocks=1 00:08:30.739 --rc geninfo_unexecuted_blocks=1 00:08:30.739 00:08:30.739 ' 00:08:30.739 04:04:32 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.739 04:04:32 -- nvmf/common.sh@7 -- # uname -s 00:08:30.739 04:04:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.739 04:04:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.739 04:04:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.739 04:04:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.739 04:04:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.739 04:04:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.739 04:04:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.739 04:04:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.739 04:04:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.739 04:04:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.000 04:04:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
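The host NQN generated here, together with the host ID derived from it on the next line, is passed to every nvme discover and nvme connect call in these tests. A short illustrative sketch of how that pair is typically assembled and consumed (the parameter expansion used to pull out the UUID is an assumption for illustration; the nvme-cli syntax matches the discover commands elsewhere in this log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the UUID portion for --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Later calls splice the array in, for example:
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json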
00:08:31.000 04:04:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:08:31.000 04:04:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.000 04:04:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.000 04:04:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.000 04:04:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.000 04:04:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.000 04:04:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.000 04:04:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.000 04:04:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.000 04:04:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.000 04:04:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.000 04:04:32 -- paths/export.sh@5 -- # export PATH 00:08:31.000 04:04:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.000 04:04:32 -- nvmf/common.sh@46 -- # : 0 00:08:31.000 04:04:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:31.000 04:04:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:31.000 04:04:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:31.000 04:04:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.000 04:04:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.000 04:04:32 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:31.000 04:04:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:31.000 04:04:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:31.000 04:04:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.000 04:04:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.000 04:04:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:31.000 04:04:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:31.001 04:04:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.001 04:04:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:31.001 04:04:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:31.001 04:04:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:31.001 04:04:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.001 04:04:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.001 04:04:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.001 04:04:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:31.001 04:04:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:31.001 04:04:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:31.001 04:04:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:31.001 04:04:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:31.001 04:04:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:31.001 04:04:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.001 04:04:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.001 04:04:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.001 04:04:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:31.001 04:04:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.001 04:04:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.001 04:04:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.001 04:04:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.001 04:04:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.001 04:04:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.001 04:04:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.001 04:04:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.001 04:04:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:31.001 04:04:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:31.001 Cannot find device "nvmf_tgt_br" 00:08:31.001 04:04:32 -- nvmf/common.sh@154 -- # true 00:08:31.001 04:04:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.001 Cannot find device "nvmf_tgt_br2" 00:08:31.001 04:04:32 -- nvmf/common.sh@155 -- # true 00:08:31.001 04:04:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:31.001 04:04:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:31.001 Cannot find device "nvmf_tgt_br" 00:08:31.001 04:04:32 -- nvmf/common.sh@157 -- # true 00:08:31.001 04:04:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:31.001 Cannot find device "nvmf_tgt_br2" 00:08:31.001 04:04:32 -- nvmf/common.sh@158 -- # true 00:08:31.001 04:04:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:31.001 04:04:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:31.001 04:04:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:31.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.001 04:04:32 -- nvmf/common.sh@161 -- # true 00:08:31.001 04:04:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.001 04:04:32 -- nvmf/common.sh@162 -- # true 00:08:31.002 04:04:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.002 04:04:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.002 04:04:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.002 04:04:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.002 04:04:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.002 04:04:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.002 04:04:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.002 04:04:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.002 04:04:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.002 04:04:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:31.002 04:04:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:31.002 04:04:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:31.002 04:04:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:31.002 04:04:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.263 04:04:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.263 04:04:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.263 04:04:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:31.263 04:04:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:31.263 04:04:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.263 04:04:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.263 04:04:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.263 04:04:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.263 04:04:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.263 04:04:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:31.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:31.263 00:08:31.263 --- 10.0.0.2 ping statistics --- 00:08:31.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.263 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:31.263 04:04:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:31.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:08:31.263 00:08:31.263 --- 10.0.0.3 ping statistics --- 00:08:31.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.263 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:31.263 04:04:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:31.263 00:08:31.263 --- 10.0.0.1 ping statistics --- 00:08:31.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.263 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:31.263 04:04:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.263 04:04:32 -- nvmf/common.sh@421 -- # return 0 00:08:31.263 04:04:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:31.263 04:04:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.263 04:04:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:31.263 04:04:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:31.263 04:04:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.263 04:04:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:31.263 04:04:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:31.263 04:04:32 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:31.263 04:04:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:31.263 04:04:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.263 04:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:31.263 04:04:32 -- nvmf/common.sh@469 -- # nvmfpid=74037 00:08:31.263 04:04:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.263 04:04:32 -- nvmf/common.sh@470 -- # waitforlisten 74037 00:08:31.263 04:04:32 -- common/autotest_common.sh@829 -- # '[' -z 74037 ']' 00:08:31.263 04:04:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.263 04:04:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.263 04:04:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.263 04:04:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.263 04:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:31.263 [2024-11-26 04:04:32.932591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.263 [2024-11-26 04:04:32.932670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.522 [2024-11-26 04:04:33.072302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.522 [2024-11-26 04:04:33.151333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.522 [2024-11-26 04:04:33.151630] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.522 [2024-11-26 04:04:33.151761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.522 [2024-11-26 04:04:33.151859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
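The nvmf_veth_init sequence above builds the three-legged topology every test in this log relies on: the initiator address 10.0.0.1 stays in the root namespace while the two target addresses 10.0.0.2 and 10.0.0.3 sit behind veth pairs inside nvmf_tgt_ns_spdk, with all peer ends enslaved to one bridge. A condensed sketch of that setup, assuming root privileges, iproute2/iptables, and the interface names used above:

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per leg; the peer (*_br) ends stay in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and tie the root-namespace ends together with a bridge.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # Let NVMe/TCP traffic in and confirm reachability before starting the target.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3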
00:08:31.522 [2024-11-26 04:04:33.152031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.522 [2024-11-26 04:04:33.152122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.522 [2024-11-26 04:04:33.152594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.522 [2024-11-26 04:04:33.152606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.460 04:04:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.460 04:04:33 -- common/autotest_common.sh@862 -- # return 0 00:08:32.460 04:04:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:32.460 04:04:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.460 04:04:33 -- common/autotest_common.sh@10 -- # set +x 00:08:32.460 04:04:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.460 04:04:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.460 04:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.460 [2024-11-26 04:04:34.012442] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.460 04:04:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.460 04:04:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.460 04:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.460 04:04:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.460 04:04:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.460 04:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.460 04:04:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.460 04:04:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.460 04:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.460 04:04:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.460 04:04:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.460 04:04:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.460 [2024-11-26 04:04:34.089955] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.460 04:04:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:32.460 04:04:34 -- target/connect_disconnect.sh@34 -- # set +x 00:08:34.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
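The RPC sequence traced above (TCP transport, 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, namespace, listener on 10.0.0.2:4420) is the target-side setup that each of the connect/disconnect iterations below exercises. A minimal sketch of the same steps issued through scripts/rpc.py, assuming the default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0
    "$rpc" bdev_malloc_create -b Malloc0 64 512            # 64 MiB bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420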
00:08:43.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.083 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:34.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.165 04:08:20 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
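The hundred "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the visible output of the connect/disconnect loop; a simplified sketch of one iteration, assuming nvme-cli and the listener created earlier (the real test also waits for the namespace device to appear before disconnecting):

    for ((i = 0; i < 100; i++)); do
        # -i 8 requests 8 I/O queues, matching NVME_CONNECT='nvme connect -i 8' above.
        nvme connect -i 8 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        # Each disconnect prints the "disconnected 1 controller(s)" line seen above.
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done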
00:12:19.165 04:08:20 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:19.165 04:08:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:19.165 04:08:20 -- nvmf/common.sh@116 -- # sync 00:12:19.165 04:08:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:19.165 04:08:20 -- nvmf/common.sh@119 -- # set +e 00:12:19.165 04:08:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:19.165 04:08:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:19.165 rmmod nvme_tcp 00:12:19.165 rmmod nvme_fabrics 00:12:19.165 rmmod nvme_keyring 00:12:19.165 04:08:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:19.165 04:08:20 -- nvmf/common.sh@123 -- # set -e 00:12:19.165 04:08:20 -- nvmf/common.sh@124 -- # return 0 00:12:19.165 04:08:20 -- nvmf/common.sh@477 -- # '[' -n 74037 ']' 00:12:19.165 04:08:20 -- nvmf/common.sh@478 -- # killprocess 74037 00:12:19.165 04:08:20 -- common/autotest_common.sh@936 -- # '[' -z 74037 ']' 00:12:19.165 04:08:20 -- common/autotest_common.sh@940 -- # kill -0 74037 00:12:19.165 04:08:20 -- common/autotest_common.sh@941 -- # uname 00:12:19.165 04:08:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.165 04:08:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74037 00:12:19.165 04:08:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.165 04:08:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.165 killing process with pid 74037 00:12:19.165 04:08:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74037' 00:12:19.165 04:08:20 -- common/autotest_common.sh@955 -- # kill 74037 00:12:19.165 04:08:20 -- common/autotest_common.sh@960 -- # wait 74037 00:12:19.165 04:08:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:19.165 04:08:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:19.165 04:08:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:19.165 04:08:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.165 04:08:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:19.165 04:08:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.165 04:08:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.165 04:08:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.165 04:08:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:19.165 00:12:19.166 real 3m48.529s 00:12:19.166 user 14m55.640s 00:12:19.166 sys 0m17.603s 00:12:19.166 04:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:19.166 04:08:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.166 ************************************ 00:12:19.166 END TEST nvmf_connect_disconnect 00:12:19.166 ************************************ 00:12:19.166 04:08:20 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.166 04:08:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:19.166 04:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.166 04:08:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.166 ************************************ 00:12:19.166 START TEST nvmf_multitarget 00:12:19.166 ************************************ 00:12:19.166 04:08:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.425 * Looking for test storage... 
00:12:19.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:19.425 04:08:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:19.425 04:08:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:19.425 04:08:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:19.425 04:08:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:19.425 04:08:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:19.425 04:08:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:19.425 04:08:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:19.425 04:08:21 -- scripts/common.sh@335 -- # IFS=.-: 00:12:19.425 04:08:21 -- scripts/common.sh@335 -- # read -ra ver1 00:12:19.425 04:08:21 -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.425 04:08:21 -- scripts/common.sh@336 -- # read -ra ver2 00:12:19.425 04:08:21 -- scripts/common.sh@337 -- # local 'op=<' 00:12:19.425 04:08:21 -- scripts/common.sh@339 -- # ver1_l=2 00:12:19.425 04:08:21 -- scripts/common.sh@340 -- # ver2_l=1 00:12:19.425 04:08:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:19.425 04:08:21 -- scripts/common.sh@343 -- # case "$op" in 00:12:19.425 04:08:21 -- scripts/common.sh@344 -- # : 1 00:12:19.425 04:08:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:19.425 04:08:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.425 04:08:21 -- scripts/common.sh@364 -- # decimal 1 00:12:19.425 04:08:21 -- scripts/common.sh@352 -- # local d=1 00:12:19.425 04:08:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.425 04:08:21 -- scripts/common.sh@354 -- # echo 1 00:12:19.425 04:08:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:19.425 04:08:21 -- scripts/common.sh@365 -- # decimal 2 00:12:19.425 04:08:21 -- scripts/common.sh@352 -- # local d=2 00:12:19.425 04:08:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.425 04:08:21 -- scripts/common.sh@354 -- # echo 2 00:12:19.425 04:08:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:19.425 04:08:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:19.425 04:08:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:19.425 04:08:21 -- scripts/common.sh@367 -- # return 0 00:12:19.425 04:08:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.425 04:08:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.425 --rc genhtml_branch_coverage=1 00:12:19.425 --rc genhtml_function_coverage=1 00:12:19.425 --rc genhtml_legend=1 00:12:19.425 --rc geninfo_all_blocks=1 00:12:19.425 --rc geninfo_unexecuted_blocks=1 00:12:19.425 00:12:19.425 ' 00:12:19.425 04:08:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.425 --rc genhtml_branch_coverage=1 00:12:19.425 --rc genhtml_function_coverage=1 00:12:19.425 --rc genhtml_legend=1 00:12:19.425 --rc geninfo_all_blocks=1 00:12:19.425 --rc geninfo_unexecuted_blocks=1 00:12:19.425 00:12:19.425 ' 00:12:19.425 04:08:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.425 --rc genhtml_branch_coverage=1 00:12:19.425 --rc genhtml_function_coverage=1 00:12:19.425 --rc genhtml_legend=1 00:12:19.425 --rc geninfo_all_blocks=1 00:12:19.425 --rc geninfo_unexecuted_blocks=1 00:12:19.425 00:12:19.425 ' 00:12:19.425 
04:08:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.425 --rc genhtml_branch_coverage=1 00:12:19.425 --rc genhtml_function_coverage=1 00:12:19.425 --rc genhtml_legend=1 00:12:19.425 --rc geninfo_all_blocks=1 00:12:19.425 --rc geninfo_unexecuted_blocks=1 00:12:19.425 00:12:19.425 ' 00:12:19.425 04:08:21 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.425 04:08:21 -- nvmf/common.sh@7 -- # uname -s 00:12:19.425 04:08:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.425 04:08:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.425 04:08:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.425 04:08:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.426 04:08:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.426 04:08:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.426 04:08:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.426 04:08:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.426 04:08:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.426 04:08:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.426 04:08:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:19.426 04:08:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:19.426 04:08:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.426 04:08:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.426 04:08:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.426 04:08:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.426 04:08:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.426 04:08:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.426 04:08:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.426 04:08:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.426 04:08:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.426 04:08:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.426 04:08:21 -- paths/export.sh@5 -- # export PATH 00:12:19.426 04:08:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.426 04:08:21 -- nvmf/common.sh@46 -- # : 0 00:12:19.426 04:08:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:19.426 04:08:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:19.426 04:08:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:19.426 04:08:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.426 04:08:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.426 04:08:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:19.426 04:08:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:19.426 04:08:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:19.426 04:08:21 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.426 04:08:21 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:19.426 04:08:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:19.426 04:08:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.426 04:08:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:19.426 04:08:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:19.426 04:08:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:19.426 04:08:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.426 04:08:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.426 04:08:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.426 04:08:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:19.426 04:08:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:19.426 04:08:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:19.426 04:08:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:19.426 04:08:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:19.426 04:08:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:19.426 04:08:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.426 04:08:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.426 04:08:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:19.426 04:08:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:19.426 04:08:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:19.426 04:08:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:19.426 04:08:21 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:19.426 04:08:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.426 04:08:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:19.426 04:08:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:19.426 04:08:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:19.426 04:08:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:19.426 04:08:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:19.426 04:08:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:19.426 Cannot find device "nvmf_tgt_br" 00:12:19.426 04:08:21 -- nvmf/common.sh@154 -- # true 00:12:19.426 04:08:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:19.426 Cannot find device "nvmf_tgt_br2" 00:12:19.426 04:08:21 -- nvmf/common.sh@155 -- # true 00:12:19.426 04:08:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:19.426 04:08:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:19.426 Cannot find device "nvmf_tgt_br" 00:12:19.426 04:08:21 -- nvmf/common.sh@157 -- # true 00:12:19.426 04:08:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:19.426 Cannot find device "nvmf_tgt_br2" 00:12:19.426 04:08:21 -- nvmf/common.sh@158 -- # true 00:12:19.426 04:08:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:19.686 04:08:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:19.686 04:08:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.686 04:08:21 -- nvmf/common.sh@161 -- # true 00:12:19.686 04:08:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.686 04:08:21 -- nvmf/common.sh@162 -- # true 00:12:19.686 04:08:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:19.686 04:08:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:19.686 04:08:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:19.686 04:08:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:19.686 04:08:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.686 04:08:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.686 04:08:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.686 04:08:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.686 04:08:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.686 04:08:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:19.686 04:08:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:19.686 04:08:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:19.686 04:08:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:19.686 04:08:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.686 04:08:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.686 04:08:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:19.686 04:08:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:19.686 04:08:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:19.686 04:08:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.686 04:08:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.686 04:08:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.686 04:08:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.686 04:08:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.686 04:08:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:19.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:19.686 00:12:19.686 --- 10.0.0.2 ping statistics --- 00:12:19.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.686 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:19.686 04:08:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:19.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:19.686 00:12:19.686 --- 10.0.0.3 ping statistics --- 00:12:19.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.686 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:19.686 04:08:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:12:19.686 00:12:19.686 --- 10.0.0.1 ping statistics --- 00:12:19.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.686 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:19.686 04:08:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.686 04:08:21 -- nvmf/common.sh@421 -- # return 0 00:12:19.686 04:08:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.686 04:08:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.686 04:08:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.686 04:08:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.686 04:08:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.686 04:08:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.686 04:08:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.686 04:08:21 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:19.686 04:08:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.686 04:08:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.686 04:08:21 -- common/autotest_common.sh@10 -- # set +x 00:12:19.686 04:08:21 -- nvmf/common.sh@469 -- # nvmfpid=77856 00:12:19.686 04:08:21 -- nvmf/common.sh@470 -- # waitforlisten 77856 00:12:19.686 04:08:21 -- common/autotest_common.sh@829 -- # '[' -z 77856 ']' 00:12:19.686 04:08:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.686 04:08:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.686 04:08:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.686 04:08:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:19.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.686 04:08:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.686 04:08:21 -- common/autotest_common.sh@10 -- # set +x 00:12:19.946 [2024-11-26 04:08:21.478815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:19.946 [2024-11-26 04:08:21.478912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.946 [2024-11-26 04:08:21.620989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.946 [2024-11-26 04:08:21.691812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.946 [2024-11-26 04:08:21.691952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.946 [2024-11-26 04:08:21.691964] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.946 [2024-11-26 04:08:21.691971] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.946 [2024-11-26 04:08:21.692095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.946 [2024-11-26 04:08:21.692639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.946 [2024-11-26 04:08:21.692824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.946 [2024-11-26 04:08:21.692879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.882 04:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.882 04:08:22 -- common/autotest_common.sh@862 -- # return 0 00:12:20.882 04:08:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:20.882 04:08:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.882 04:08:22 -- common/autotest_common.sh@10 -- # set +x 00:12:20.882 04:08:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.882 04:08:22 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:20.882 04:08:22 -- target/multitarget.sh@21 -- # jq length 00:12:20.882 04:08:22 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.141 04:08:22 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:21.141 04:08:22 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:21.141 "nvmf_tgt_1" 00:12:21.141 04:08:22 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:21.141 "nvmf_tgt_2" 00:12:21.400 04:08:22 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.400 04:08:22 -- target/multitarget.sh@28 -- # jq length 00:12:21.400 04:08:23 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:21.400 04:08:23 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:21.659 true 00:12:21.659 04:08:23 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_2 00:12:21.659 true 00:12:21.659 04:08:23 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.659 04:08:23 -- target/multitarget.sh@35 -- # jq length 00:12:21.918 04:08:23 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:21.918 04:08:23 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:21.918 04:08:23 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:21.918 04:08:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:21.918 04:08:23 -- nvmf/common.sh@116 -- # sync 00:12:21.918 04:08:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:21.918 04:08:23 -- nvmf/common.sh@119 -- # set +e 00:12:21.918 04:08:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:21.918 04:08:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:21.918 rmmod nvme_tcp 00:12:21.918 rmmod nvme_fabrics 00:12:21.918 rmmod nvme_keyring 00:12:21.918 04:08:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:21.918 04:08:23 -- nvmf/common.sh@123 -- # set -e 00:12:21.918 04:08:23 -- nvmf/common.sh@124 -- # return 0 00:12:21.918 04:08:23 -- nvmf/common.sh@477 -- # '[' -n 77856 ']' 00:12:21.918 04:08:23 -- nvmf/common.sh@478 -- # killprocess 77856 00:12:21.918 04:08:23 -- common/autotest_common.sh@936 -- # '[' -z 77856 ']' 00:12:21.918 04:08:23 -- common/autotest_common.sh@940 -- # kill -0 77856 00:12:21.918 04:08:23 -- common/autotest_common.sh@941 -- # uname 00:12:21.918 04:08:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:21.918 04:08:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77856 00:12:21.918 04:08:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:21.918 04:08:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:21.918 killing process with pid 77856 00:12:21.918 04:08:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77856' 00:12:21.918 04:08:23 -- common/autotest_common.sh@955 -- # kill 77856 00:12:21.918 04:08:23 -- common/autotest_common.sh@960 -- # wait 77856 00:12:22.177 04:08:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:22.177 04:08:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:22.177 04:08:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:22.177 04:08:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.177 04:08:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:22.177 04:08:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.177 04:08:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.177 04:08:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.177 04:08:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:22.177 ************************************ 00:12:22.177 END TEST nvmf_multitarget 00:12:22.177 ************************************ 00:12:22.177 00:12:22.177 real 0m2.978s 00:12:22.177 user 0m9.764s 00:12:22.177 sys 0m0.730s 00:12:22.178 04:08:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.178 04:08:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.178 04:08:23 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:22.178 04:08:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:22.178 04:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.178 04:08:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.178 
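(For reference, the nvmf_multitarget run that just finished boils down to the following RPC sequence against a single nvmf_tgt process. This is a condensed sketch, not part of the captured trace; the helper path, target names and the -s 32 size argument are the ones shown above, and the expected counts assume only the default target exists when the test starts.)

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length            # 1: only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length            # 3: default target plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length            # back to 1 before nvmftestfini tears down
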
************************************ 00:12:22.178 START TEST nvmf_rpc 00:12:22.178 ************************************ 00:12:22.178 04:08:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:22.437 * Looking for test storage... 00:12:22.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:22.437 04:08:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:22.437 04:08:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:22.437 04:08:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:22.437 04:08:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:22.437 04:08:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:22.437 04:08:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:22.437 04:08:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:22.437 04:08:24 -- scripts/common.sh@335 -- # IFS=.-: 00:12:22.437 04:08:24 -- scripts/common.sh@335 -- # read -ra ver1 00:12:22.437 04:08:24 -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.437 04:08:24 -- scripts/common.sh@336 -- # read -ra ver2 00:12:22.437 04:08:24 -- scripts/common.sh@337 -- # local 'op=<' 00:12:22.437 04:08:24 -- scripts/common.sh@339 -- # ver1_l=2 00:12:22.437 04:08:24 -- scripts/common.sh@340 -- # ver2_l=1 00:12:22.437 04:08:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:22.437 04:08:24 -- scripts/common.sh@343 -- # case "$op" in 00:12:22.437 04:08:24 -- scripts/common.sh@344 -- # : 1 00:12:22.437 04:08:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:22.437 04:08:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.437 04:08:24 -- scripts/common.sh@364 -- # decimal 1 00:12:22.437 04:08:24 -- scripts/common.sh@352 -- # local d=1 00:12:22.437 04:08:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.437 04:08:24 -- scripts/common.sh@354 -- # echo 1 00:12:22.437 04:08:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:22.437 04:08:24 -- scripts/common.sh@365 -- # decimal 2 00:12:22.437 04:08:24 -- scripts/common.sh@352 -- # local d=2 00:12:22.437 04:08:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.437 04:08:24 -- scripts/common.sh@354 -- # echo 2 00:12:22.437 04:08:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:22.437 04:08:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:22.437 04:08:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:22.437 04:08:24 -- scripts/common.sh@367 -- # return 0 00:12:22.437 04:08:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.437 04:08:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:22.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.438 --rc genhtml_branch_coverage=1 00:12:22.438 --rc genhtml_function_coverage=1 00:12:22.438 --rc genhtml_legend=1 00:12:22.438 --rc geninfo_all_blocks=1 00:12:22.438 --rc geninfo_unexecuted_blocks=1 00:12:22.438 00:12:22.438 ' 00:12:22.438 04:08:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:22.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.438 --rc genhtml_branch_coverage=1 00:12:22.438 --rc genhtml_function_coverage=1 00:12:22.438 --rc genhtml_legend=1 00:12:22.438 --rc geninfo_all_blocks=1 00:12:22.438 --rc geninfo_unexecuted_blocks=1 00:12:22.438 00:12:22.438 ' 00:12:22.438 04:08:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:22.438 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.438 --rc genhtml_branch_coverage=1 00:12:22.438 --rc genhtml_function_coverage=1 00:12:22.438 --rc genhtml_legend=1 00:12:22.438 --rc geninfo_all_blocks=1 00:12:22.438 --rc geninfo_unexecuted_blocks=1 00:12:22.438 00:12:22.438 ' 00:12:22.438 04:08:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:22.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.438 --rc genhtml_branch_coverage=1 00:12:22.438 --rc genhtml_function_coverage=1 00:12:22.438 --rc genhtml_legend=1 00:12:22.438 --rc geninfo_all_blocks=1 00:12:22.438 --rc geninfo_unexecuted_blocks=1 00:12:22.438 00:12:22.438 ' 00:12:22.438 04:08:24 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.438 04:08:24 -- nvmf/common.sh@7 -- # uname -s 00:12:22.438 04:08:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.438 04:08:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.438 04:08:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.438 04:08:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.438 04:08:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.438 04:08:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.438 04:08:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.438 04:08:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.438 04:08:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.438 04:08:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.438 04:08:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:22.438 04:08:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:22.438 04:08:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.438 04:08:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.438 04:08:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.438 04:08:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.438 04:08:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.438 04:08:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.438 04:08:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.438 04:08:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.438 04:08:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.438 04:08:24 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.438 04:08:24 -- paths/export.sh@5 -- # export PATH 00:12:22.438 04:08:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.438 04:08:24 -- nvmf/common.sh@46 -- # : 0 00:12:22.438 04:08:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:22.438 04:08:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:22.438 04:08:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:22.438 04:08:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.438 04:08:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.438 04:08:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:22.438 04:08:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:22.438 04:08:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:22.438 04:08:24 -- target/rpc.sh@11 -- # loops=5 00:12:22.438 04:08:24 -- target/rpc.sh@23 -- # nvmftestinit 00:12:22.438 04:08:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:22.438 04:08:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.438 04:08:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:22.438 04:08:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:22.438 04:08:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:22.438 04:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.438 04:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.438 04:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.438 04:08:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:22.438 04:08:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:22.438 04:08:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:22.438 04:08:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:22.438 04:08:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:22.438 04:08:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:22.438 04:08:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.438 04:08:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.438 04:08:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:22.438 04:08:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:22.438 04:08:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.438 04:08:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.438 04:08:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.438 04:08:24 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.438 04:08:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.438 04:08:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.438 04:08:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.438 04:08:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.438 04:08:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:22.438 04:08:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:22.438 Cannot find device "nvmf_tgt_br" 00:12:22.438 04:08:24 -- nvmf/common.sh@154 -- # true 00:12:22.438 04:08:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.438 Cannot find device "nvmf_tgt_br2" 00:12:22.438 04:08:24 -- nvmf/common.sh@155 -- # true 00:12:22.438 04:08:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:22.438 04:08:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:22.438 Cannot find device "nvmf_tgt_br" 00:12:22.438 04:08:24 -- nvmf/common.sh@157 -- # true 00:12:22.438 04:08:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:22.438 Cannot find device "nvmf_tgt_br2" 00:12:22.438 04:08:24 -- nvmf/common.sh@158 -- # true 00:12:22.438 04:08:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:22.697 04:08:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:22.698 04:08:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.698 04:08:24 -- nvmf/common.sh@161 -- # true 00:12:22.698 04:08:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.698 04:08:24 -- nvmf/common.sh@162 -- # true 00:12:22.698 04:08:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:22.698 04:08:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:22.698 04:08:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:22.698 04:08:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:22.698 04:08:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:22.698 04:08:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:22.698 04:08:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:22.698 04:08:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:22.698 04:08:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:22.698 04:08:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:22.698 04:08:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:22.698 04:08:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:22.698 04:08:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:22.698 04:08:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:22.698 04:08:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:22.698 04:08:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:22.698 04:08:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:12:22.698 04:08:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:22.698 04:08:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:22.698 04:08:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:22.957 04:08:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:22.957 04:08:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:22.957 04:08:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:22.957 04:08:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:22.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:22.957 00:12:22.957 --- 10.0.0.2 ping statistics --- 00:12:22.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.957 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:22.957 04:08:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:22.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:22.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:22.957 00:12:22.957 --- 10.0.0.3 ping statistics --- 00:12:22.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.957 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:22.957 04:08:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:22.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:12:22.957 00:12:22.957 --- 10.0.0.1 ping statistics --- 00:12:22.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.957 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:12:22.957 04:08:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.957 04:08:24 -- nvmf/common.sh@421 -- # return 0 00:12:22.957 04:08:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:22.957 04:08:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.957 04:08:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:22.957 04:08:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:22.957 04:08:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.957 04:08:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:22.957 04:08:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:22.957 04:08:24 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:22.957 04:08:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:22.957 04:08:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.957 04:08:24 -- common/autotest_common.sh@10 -- # set +x 00:12:22.957 04:08:24 -- nvmf/common.sh@469 -- # nvmfpid=78097 00:12:22.957 04:08:24 -- nvmf/common.sh@470 -- # waitforlisten 78097 00:12:22.957 04:08:24 -- common/autotest_common.sh@829 -- # '[' -z 78097 ']' 00:12:22.957 04:08:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.957 04:08:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.957 04:08:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.957 04:08:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
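(The nvmf_tgt instance being started here will listen inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init just rebuilt. Condensed from the ip/iptables commands traced above, the topology is roughly the sketch below; interface names and the 10.0.0.x/24 addresses are the ones in the trace, the "up" steps are omitted, and the second target interface nvmf_tgt_if2 at 10.0.0.3 is wired up the same way.)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # nvmf_br bridges the two host-side veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # reachability check before the target starts
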
00:12:22.957 04:08:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.957 04:08:24 -- common/autotest_common.sh@10 -- # set +x 00:12:22.957 [2024-11-26 04:08:24.575880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:22.957 [2024-11-26 04:08:24.575976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.957 [2024-11-26 04:08:24.716292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.217 [2024-11-26 04:08:24.787723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:23.217 [2024-11-26 04:08:24.787861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.217 [2024-11-26 04:08:24.787872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.217 [2024-11-26 04:08:24.787880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.217 [2024-11-26 04:08:24.788035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.217 [2024-11-26 04:08:24.788308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.217 [2024-11-26 04:08:24.789011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.217 [2024-11-26 04:08:24.789018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.785 04:08:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.785 04:08:25 -- common/autotest_common.sh@862 -- # return 0 00:12:23.785 04:08:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:23.785 04:08:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.785 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:23.785 04:08:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.786 04:08:25 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:23.786 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.786 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.045 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.045 04:08:25 -- target/rpc.sh@26 -- # stats='{ 00:12:24.045 "poll_groups": [ 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_0", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [] 00:12:24.045 }, 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_1", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [] 00:12:24.045 }, 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_2", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [] 00:12:24.045 }, 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 
00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_3", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [] 00:12:24.045 } 00:12:24.045 ], 00:12:24.045 "tick_rate": 2200000000 00:12:24.045 }' 00:12:24.045 04:08:25 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:24.045 04:08:25 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:24.045 04:08:25 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:24.045 04:08:25 -- target/rpc.sh@15 -- # wc -l 00:12:24.045 04:08:25 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:24.045 04:08:25 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:24.045 04:08:25 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:24.045 04:08:25 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.045 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.045 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.045 [2024-11-26 04:08:25.672312] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.045 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.045 04:08:25 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:24.045 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.045 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.045 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.045 04:08:25 -- target/rpc.sh@33 -- # stats='{ 00:12:24.045 "poll_groups": [ 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_0", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [ 00:12:24.045 { 00:12:24.045 "trtype": "TCP" 00:12:24.045 } 00:12:24.045 ] 00:12:24.045 }, 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_1", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [ 00:12:24.045 { 00:12:24.045 "trtype": "TCP" 00:12:24.045 } 00:12:24.045 ] 00:12:24.045 }, 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_2", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [ 00:12:24.045 { 00:12:24.045 "trtype": "TCP" 00:12:24.045 } 00:12:24.045 ] 00:12:24.045 }, 00:12:24.045 { 00:12:24.045 "admin_qpairs": 0, 00:12:24.045 "completed_nvme_io": 0, 00:12:24.045 "current_admin_qpairs": 0, 00:12:24.045 "current_io_qpairs": 0, 00:12:24.045 "io_qpairs": 0, 00:12:24.045 "name": "nvmf_tgt_poll_group_3", 00:12:24.045 "pending_bdev_io": 0, 00:12:24.045 "transports": [ 00:12:24.045 { 00:12:24.045 "trtype": "TCP" 00:12:24.045 } 00:12:24.045 ] 00:12:24.045 } 00:12:24.045 ], 00:12:24.045 "tick_rate": 2200000000 00:12:24.045 }' 00:12:24.045 04:08:25 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:24.045 04:08:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:24.045 04:08:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:24.045 04:08:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.045 04:08:25 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
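(The (( 4 == 4 )) and (( 0 == 0 )) assertions here come from rpc.sh's jcount and jsum helpers, which count and sum fields across the poll_groups array in the nvmf_get_stats payload captured above. Roughly, and with the <<< "$stats" input being an assumption beyond what the trace shows, the aggregation is:)

  jq '.poll_groups[].name' <<< "$stats" | wc -l                                # jcount: 4 poll groups, one per core with -m 0xF
  jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'  # jsum: 0 admin queue pairs before any host connects
  jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'     # jsum: 0 I/O queue pairs
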
00:12:24.045 04:08:25 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:24.045 04:08:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:24.045 04:08:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:24.045 04:08:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.305 04:08:25 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:24.305 04:08:25 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:24.305 04:08:25 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:24.305 04:08:25 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:24.305 04:08:25 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:24.305 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.305 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.305 Malloc1 00:12:24.305 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.305 04:08:25 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.305 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.305 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.305 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.305 04:08:25 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.305 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.305 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.305 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.305 04:08:25 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:24.305 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.305 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.305 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.305 04:08:25 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.305 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.305 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.305 [2024-11-26 04:08:25.891833] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.305 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.305 04:08:25 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 -a 10.0.0.2 -s 4420 00:12:24.305 04:08:25 -- common/autotest_common.sh@650 -- # local es=0 00:12:24.305 04:08:25 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 -a 10.0.0.2 -s 4420 00:12:24.305 04:08:25 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:24.305 04:08:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.305 04:08:25 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:24.305 04:08:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.305 04:08:25 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:24.305 04:08:25 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.305 04:08:25 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:24.305 04:08:25 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:24.305 04:08:25 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 -a 10.0.0.2 -s 4420 00:12:24.305 [2024-11-26 04:08:25.920150] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157' 00:12:24.305 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:24.305 could not add new controller: failed to write to nvme-fabrics device 00:12:24.305 04:08:25 -- common/autotest_common.sh@653 -- # es=1 00:12:24.305 04:08:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:24.305 04:08:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:24.305 04:08:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:24.305 04:08:25 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:24.305 04:08:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.305 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:12:24.305 04:08:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.305 04:08:25 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.564 04:08:26 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.564 04:08:26 -- common/autotest_common.sh@1187 -- # local i=0 00:12:24.564 04:08:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.564 04:08:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:24.564 04:08:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:26.468 04:08:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:26.468 04:08:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:26.468 04:08:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.468 04:08:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:26.468 04:08:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.468 04:08:28 -- common/autotest_common.sh@1197 -- # return 0 00:12:26.468 04:08:28 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.468 04:08:28 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.468 04:08:28 -- common/autotest_common.sh@1208 -- # local i=0 00:12:26.468 04:08:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:26.468 04:08:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.468 04:08:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.468 04:08:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:26.468 04:08:28 -- common/autotest_common.sh@1220 -- # return 0 00:12:26.468 04:08:28 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:26.468 04:08:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.468 04:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.468 04:08:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.468 04:08:28 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.468 04:08:28 -- common/autotest_common.sh@650 -- # local es=0 00:12:26.468 04:08:28 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.468 04:08:28 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:26.468 04:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.468 04:08:28 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:26.468 04:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.468 04:08:28 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:26.468 04:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.468 04:08:28 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:26.468 04:08:28 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:26.468 04:08:28 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.727 [2024-11-26 04:08:28.231602] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157' 00:12:26.727 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:26.727 could not add new controller: failed to write to nvme-fabrics device 00:12:26.727 04:08:28 -- common/autotest_common.sh@653 -- # es=1 00:12:26.727 04:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.727 04:08:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.727 04:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.727 04:08:28 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:26.727 04:08:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.727 04:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:26.727 04:08:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.727 04:08:28 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.727 04:08:28 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.727 04:08:28 -- common/autotest_common.sh@1187 -- # local i=0 00:12:26.727 04:08:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.727 04:08:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:26.727 04:08:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:29.259 04:08:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:29.259 
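(The (( i++ <= 15 )) / lsblk entries around here are common/autotest_common.sh's waitforserial helper polling for the namespace to appear after nvme connect. A rough reconstruction from the traced entries follows; the exact retry bound and inner sleep are assumptions beyond what the trace shows.)

  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=1 nvme_devices=0
      sleep 2                                               # give the fabrics connect time to settle
      while (( i++ <= 15 )); do
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
          sleep 1
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME                        # serial string set in nvmf/common.sh
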
04:08:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:29.259 04:08:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.259 04:08:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:29.259 04:08:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.259 04:08:30 -- common/autotest_common.sh@1197 -- # return 0 00:12:29.259 04:08:30 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.259 04:08:30 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.259 04:08:30 -- common/autotest_common.sh@1208 -- # local i=0 00:12:29.259 04:08:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:29.259 04:08:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.259 04:08:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:29.259 04:08:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.259 04:08:30 -- common/autotest_common.sh@1220 -- # return 0 00:12:29.259 04:08:30 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.259 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.259 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.259 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.259 04:08:30 -- target/rpc.sh@81 -- # seq 1 5 00:12:29.259 04:08:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.259 04:08:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.259 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.259 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.259 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.259 04:08:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.259 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.259 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.259 [2024-11-26 04:08:30.646136] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.259 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.259 04:08:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.259 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.259 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.259 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.259 04:08:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.259 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.259 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.259 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.259 04:08:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.259 04:08:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.259 04:08:30 -- common/autotest_common.sh@1187 -- # local i=0 00:12:29.259 04:08:30 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:12:29.259 04:08:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:29.259 04:08:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:31.163 04:08:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:31.163 04:08:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:31.163 04:08:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.163 04:08:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:31.163 04:08:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.163 04:08:32 -- common/autotest_common.sh@1197 -- # return 0 00:12:31.163 04:08:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.163 04:08:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.163 04:08:32 -- common/autotest_common.sh@1208 -- # local i=0 00:12:31.163 04:08:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.163 04:08:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:31.422 04:08:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:31.422 04:08:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.422 04:08:32 -- common/autotest_common.sh@1220 -- # return 0 00:12:31.422 04:08:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.422 04:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 04:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 04:08:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.422 04:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 04:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 04:08:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.422 04:08:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.422 04:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 04:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 04:08:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.422 04:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 [2024-11-26 04:08:32.966825] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.422 04:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 04:08:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.422 04:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:31.422 04:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 04:08:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.422 04:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.422 04:08:32 -- common/autotest_common.sh@10 
-- # set +x 00:12:31.422 04:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.422 04:08:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.422 04:08:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.422 04:08:33 -- common/autotest_common.sh@1187 -- # local i=0 00:12:31.422 04:08:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.422 04:08:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:31.422 04:08:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:33.967 04:08:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:33.967 04:08:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:33.967 04:08:35 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.967 04:08:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:33.967 04:08:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.967 04:08:35 -- common/autotest_common.sh@1197 -- # return 0 00:12:33.967 04:08:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.967 04:08:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.967 04:08:35 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.967 04:08:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.967 04:08:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.967 04:08:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.967 04:08:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.967 04:08:35 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.967 04:08:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.967 04:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.967 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 04:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.967 04:08:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.967 04:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.967 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 04:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.967 04:08:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.967 04:08:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.967 04:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.967 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 04:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.967 04:08:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.967 04:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.967 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 [2024-11-26 04:08:35.291354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.967 04:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.967 04:08:35 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.967 04:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.967 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 04:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.967 04:08:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.967 04:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.967 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 04:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.967 04:08:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.967 04:08:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.967 04:08:35 -- common/autotest_common.sh@1187 -- # local i=0 00:12:33.967 04:08:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.967 04:08:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:33.967 04:08:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:35.908 04:08:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:35.908 04:08:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:35.908 04:08:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.908 04:08:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:35.908 04:08:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.908 04:08:37 -- common/autotest_common.sh@1197 -- # return 0 00:12:35.908 04:08:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.908 04:08:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.908 04:08:37 -- common/autotest_common.sh@1208 -- # local i=0 00:12:35.908 04:08:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:35.908 04:08:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.908 04:08:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.908 04:08:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:35.908 04:08:37 -- common/autotest_common.sh@1220 -- # return 0 00:12:35.909 04:08:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.909 04:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 04:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 04:08:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.909 04:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 04:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 04:08:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.909 04:08:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.909 04:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 04:08:37 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 04:08:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.909 04:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 [2024-11-26 04:08:37.608285] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.909 04:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 04:08:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.909 04:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 04:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 04:08:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.909 04:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.909 04:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:35.909 04:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.909 04:08:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.167 04:08:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.167 04:08:37 -- common/autotest_common.sh@1187 -- # local i=0 00:12:36.167 04:08:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.167 04:08:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:36.168 04:08:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:38.076 04:08:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:38.076 04:08:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:38.076 04:08:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.076 04:08:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:38.076 04:08:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.076 04:08:39 -- common/autotest_common.sh@1197 -- # return 0 00:12:38.076 04:08:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.335 04:08:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.335 04:08:39 -- common/autotest_common.sh@1208 -- # local i=0 00:12:38.335 04:08:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:38.335 04:08:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.335 04:08:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.335 04:08:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:38.336 04:08:39 -- common/autotest_common.sh@1220 -- # return 0 00:12:38.336 04:08:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.336 04:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 04:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 04:08:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.336 04:08:39 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 04:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 04:08:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.336 04:08:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.336 04:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 04:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 04:08:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.336 04:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 [2024-11-26 04:08:39.932496] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.336 04:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 04:08:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.336 04:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 04:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 04:08:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.336 04:08:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 04:08:39 -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 04:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 04:08:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.595 04:08:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.595 04:08:40 -- common/autotest_common.sh@1187 -- # local i=0 00:12:38.595 04:08:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.595 04:08:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:38.595 04:08:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:40.501 04:08:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:40.501 04:08:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:40.501 04:08:42 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.501 04:08:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:40.501 04:08:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.501 04:08:42 -- common/autotest_common.sh@1197 -- # return 0 00:12:40.501 04:08:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.501 04:08:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.501 04:08:42 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.501 04:08:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.501 04:08:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.501 04:08:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.501 04:08:42 -- common/autotest_common.sh@1216 -- # lsblk 
-l -o NAME,SERIAL 00:12:40.501 04:08:42 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.501 04:08:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.501 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.501 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.501 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.501 04:08:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.501 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.501 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.501 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.501 04:08:42 -- target/rpc.sh@99 -- # seq 1 5 00:12:40.501 04:08:42 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.501 04:08:42 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.501 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.501 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.501 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.501 04:08:42 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.501 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.501 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.501 [2024-11-26 04:08:42.245446] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.501 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.501 04:08:42 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.501 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.501 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.501 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.502 04:08:42 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.502 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.502 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 04:08:42 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.761 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 04:08:42 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.761 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 04:08:42 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.761 04:08:42 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.761 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 04:08:42 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.761 
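The iterations traced above all repeat the same body from rpc.sh lines 81-94: build a subsystem, export it over TCP, connect from the initiator, wait for the serial to show up in lsblk, then tear everything back down. A minimal bash sketch of one such iteration follows; the subsystem NQN, serial, address, and helper behaviour are taken from the trace, while the rpc.py path, the loop count, and the simplified wait loops are assumptions for illustration.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

for i in $(seq 1 5); do
    # Build the subsystem fresh on every pass.
    "$rpc_py" nvmf_create_subsystem "$nqn" -s "$serial"
    "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    "$rpc_py" nvmf_subsystem_allow_any_host "$nqn"

    # Connect from the initiator; the trace also passes --hostnqn/--hostid,
    # omitted here for brevity. Then poll lsblk until the serial appears,
    # giving up after ~15 tries as waitforserial does in the trace.
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    for (( try = 0; try < 15; try++ )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && break
        sleep 2
    done

    # Tear down again before the next pass.
    nvme disconnect -n "$nqn"
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 1; done
    "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 5
    "$rpc_py" nvmf_delete_subsystem "$nqn"
done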
04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 [2024-11-26 04:08:42.293451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.761 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 04:08:42 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.761 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 04:08:42 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.761 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.762 04:08:42 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 [2024-11-26 04:08:42.345536] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.762 04:08:42 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 [2024-11-26 04:08:42.393601] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.762 04:08:42 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 [2024-11-26 04:08:42.441652] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.762 04:08:42 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:40.762 04:08:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.762 04:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.762 04:08:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.762 04:08:42 -- target/rpc.sh@110 -- # stats='{ 00:12:40.762 "poll_groups": [ 00:12:40.762 { 00:12:40.762 "admin_qpairs": 2, 00:12:40.762 "completed_nvme_io": 163, 00:12:40.762 "current_admin_qpairs": 0, 00:12:40.762 "current_io_qpairs": 0, 00:12:40.762 "io_qpairs": 16, 00:12:40.762 "name": "nvmf_tgt_poll_group_0", 00:12:40.762 "pending_bdev_io": 0, 00:12:40.762 "transports": [ 00:12:40.762 { 00:12:40.762 "trtype": "TCP" 00:12:40.762 } 00:12:40.762 ] 00:12:40.762 }, 00:12:40.762 { 00:12:40.762 "admin_qpairs": 3, 00:12:40.762 "completed_nvme_io": 67, 00:12:40.762 "current_admin_qpairs": 0, 00:12:40.762 "current_io_qpairs": 0, 00:12:40.762 "io_qpairs": 17, 00:12:40.762 "name": "nvmf_tgt_poll_group_1", 00:12:40.762 "pending_bdev_io": 0, 00:12:40.762 "transports": [ 00:12:40.762 { 00:12:40.762 "trtype": "TCP" 00:12:40.762 } 00:12:40.762 ] 00:12:40.762 }, 00:12:40.762 { 00:12:40.762 "admin_qpairs": 1, 00:12:40.762 "completed_nvme_io": 70, 00:12:40.762 "current_admin_qpairs": 0, 00:12:40.762 "current_io_qpairs": 0, 00:12:40.762 "io_qpairs": 19, 00:12:40.762 "name": "nvmf_tgt_poll_group_2", 00:12:40.762 "pending_bdev_io": 0, 00:12:40.762 "transports": [ 00:12:40.762 { 00:12:40.762 "trtype": "TCP" 00:12:40.762 } 00:12:40.762 ] 00:12:40.762 }, 00:12:40.762 { 00:12:40.762 "admin_qpairs": 1, 00:12:40.762 "completed_nvme_io": 120, 00:12:40.762 "current_admin_qpairs": 0, 00:12:40.762 "current_io_qpairs": 0, 00:12:40.762 "io_qpairs": 18, 00:12:40.762 "name": "nvmf_tgt_poll_group_3", 00:12:40.762 "pending_bdev_io": 0, 00:12:40.762 "transports": [ 00:12:40.762 { 00:12:40.762 "trtype": "TCP" 00:12:40.762 } 00:12:40.762 ] 00:12:40.762 } 00:12:40.762 ], 00:12:40.762 "tick_rate": 2200000000 00:12:40.762 }' 00:12:40.762 04:08:42 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:40.762 04:08:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:40.762 04:08:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.762 04:08:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.022 04:08:42 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
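The admin_qpairs check just shown (and the io_qpairs check that follows) both go through the jsum helper: sum one numeric field across every poll group in the nvmf_get_stats output and assert the total is non-zero. A small sketch of that pattern is below; the jq/awk pipeline mirrors the trace, while the rpc.py path and re-querying the stats instead of reusing a captured variable are assumptions.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

jsum() {
    # Sum a numeric jq filter over the current target statistics.
    local filter=$1
    "$rpc_py" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 70 in the run above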
00:12:41.022 04:08:42 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.022 04:08:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:41.022 04:08:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.022 04:08:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.022 04:08:42 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:41.022 04:08:42 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:41.022 04:08:42 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:41.022 04:08:42 -- target/rpc.sh@123 -- # nvmftestfini 00:12:41.022 04:08:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:41.022 04:08:42 -- nvmf/common.sh@116 -- # sync 00:12:41.022 04:08:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:41.022 04:08:42 -- nvmf/common.sh@119 -- # set +e 00:12:41.022 04:08:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:41.022 04:08:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.022 rmmod nvme_tcp 00:12:41.022 rmmod nvme_fabrics 00:12:41.022 rmmod nvme_keyring 00:12:41.022 04:08:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.022 04:08:42 -- nvmf/common.sh@123 -- # set -e 00:12:41.022 04:08:42 -- nvmf/common.sh@124 -- # return 0 00:12:41.022 04:08:42 -- nvmf/common.sh@477 -- # '[' -n 78097 ']' 00:12:41.022 04:08:42 -- nvmf/common.sh@478 -- # killprocess 78097 00:12:41.022 04:08:42 -- common/autotest_common.sh@936 -- # '[' -z 78097 ']' 00:12:41.022 04:08:42 -- common/autotest_common.sh@940 -- # kill -0 78097 00:12:41.022 04:08:42 -- common/autotest_common.sh@941 -- # uname 00:12:41.022 04:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:41.022 04:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78097 00:12:41.022 04:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:41.022 04:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:41.022 04:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78097' 00:12:41.022 killing process with pid 78097 00:12:41.022 04:08:42 -- common/autotest_common.sh@955 -- # kill 78097 00:12:41.022 04:08:42 -- common/autotest_common.sh@960 -- # wait 78097 00:12:41.281 04:08:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.281 04:08:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.281 04:08:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.281 04:08:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.281 04:08:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.281 04:08:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.281 04:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.281 04:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.539 04:08:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:41.539 00:12:41.539 real 0m19.151s 00:12:41.539 user 1m12.297s 00:12:41.539 sys 0m2.062s 00:12:41.539 04:08:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.539 04:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:41.539 ************************************ 00:12:41.539 END TEST nvmf_rpc 00:12:41.539 ************************************ 00:12:41.539 04:08:43 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:41.539 04:08:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:41.540 04:08:43 -- 
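The teardown traced above unloads the nvme-tcp modules and then stops the target through killprocess. A rough sketch of that helper, based only on the calls visible in the trace (kill -0 probe, ps --no-headers -o comm=, kill, wait), is shown here; error handling and the sudo path are simplified assumptions.

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it if it is our child
}

killprocess 78097   # PID recorded by nvmfappstart earlier in this run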
common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.540 04:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:41.540 ************************************ 00:12:41.540 START TEST nvmf_invalid 00:12:41.540 ************************************ 00:12:41.540 04:08:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:41.540 * Looking for test storage... 00:12:41.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:41.540 04:08:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:41.540 04:08:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:41.540 04:08:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.540 04:08:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.540 04:08:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.540 04:08:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.540 04:08:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.540 04:08:43 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.540 04:08:43 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.540 04:08:43 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.540 04:08:43 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.540 04:08:43 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.540 04:08:43 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.540 04:08:43 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.540 04:08:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.540 04:08:43 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.540 04:08:43 -- scripts/common.sh@344 -- # : 1 00:12:41.540 04:08:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.540 04:08:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.540 04:08:43 -- scripts/common.sh@364 -- # decimal 1 00:12:41.540 04:08:43 -- scripts/common.sh@352 -- # local d=1 00:12:41.540 04:08:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.540 04:08:43 -- scripts/common.sh@354 -- # echo 1 00:12:41.540 04:08:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.540 04:08:43 -- scripts/common.sh@365 -- # decimal 2 00:12:41.540 04:08:43 -- scripts/common.sh@352 -- # local d=2 00:12:41.540 04:08:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.540 04:08:43 -- scripts/common.sh@354 -- # echo 2 00:12:41.540 04:08:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.540 04:08:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.540 04:08:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.540 04:08:43 -- scripts/common.sh@367 -- # return 0 00:12:41.540 04:08:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.540 04:08:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 04:08:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 04:08:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 04:08:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 04:08:43 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.540 04:08:43 -- nvmf/common.sh@7 -- # uname -s 00:12:41.540 04:08:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.540 04:08:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.540 04:08:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.540 04:08:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.540 04:08:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.540 04:08:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.540 04:08:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.540 04:08:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.540 04:08:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.540 04:08:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.800 04:08:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:41.800 
04:08:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:41.800 04:08:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.800 04:08:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.800 04:08:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.800 04:08:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.800 04:08:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.800 04:08:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.800 04:08:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.800 04:08:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.800 04:08:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.800 04:08:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.800 04:08:43 -- paths/export.sh@5 -- # export PATH 00:12:41.800 04:08:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.800 04:08:43 -- nvmf/common.sh@46 -- # : 0 00:12:41.800 04:08:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.800 04:08:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.800 04:08:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.800 04:08:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.800 04:08:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.800 04:08:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:12:41.800 04:08:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.800 04:08:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.800 04:08:43 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:41.800 04:08:43 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.800 04:08:43 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:41.800 04:08:43 -- target/invalid.sh@14 -- # target=foobar 00:12:41.800 04:08:43 -- target/invalid.sh@16 -- # RANDOM=0 00:12:41.800 04:08:43 -- target/invalid.sh@34 -- # nvmftestinit 00:12:41.800 04:08:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.800 04:08:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.800 04:08:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.800 04:08:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.800 04:08:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.800 04:08:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.800 04:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.800 04:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.800 04:08:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:41.800 04:08:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:41.800 04:08:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:41.800 04:08:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:41.800 04:08:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:41.800 04:08:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:41.800 04:08:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.800 04:08:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.800 04:08:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.800 04:08:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:41.800 04:08:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.800 04:08:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.800 04:08:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.800 04:08:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.800 04:08:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.800 04:08:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.800 04:08:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.800 04:08:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.800 04:08:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:41.800 04:08:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:41.800 Cannot find device "nvmf_tgt_br" 00:12:41.800 04:08:43 -- nvmf/common.sh@154 -- # true 00:12:41.800 04:08:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.800 Cannot find device "nvmf_tgt_br2" 00:12:41.800 04:08:43 -- nvmf/common.sh@155 -- # true 00:12:41.800 04:08:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:41.800 04:08:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:41.800 Cannot find device "nvmf_tgt_br" 00:12:41.800 04:08:43 -- nvmf/common.sh@157 -- # true 00:12:41.800 04:08:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:41.800 Cannot find device "nvmf_tgt_br2" 00:12:41.800 04:08:43 -- nvmf/common.sh@158 -- # true 00:12:41.800 04:08:43 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:41.800 04:08:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:41.800 04:08:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.800 04:08:43 -- nvmf/common.sh@161 -- # true 00:12:41.800 04:08:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.800 04:08:43 -- nvmf/common.sh@162 -- # true 00:12:41.800 04:08:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.800 04:08:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.800 04:08:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.800 04:08:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.800 04:08:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.800 04:08:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.800 04:08:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.800 04:08:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:41.800 04:08:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:41.800 04:08:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:41.800 04:08:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:41.800 04:08:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:42.060 04:08:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:42.060 04:08:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:42.060 04:08:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:42.060 04:08:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:42.060 04:08:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:42.060 04:08:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:42.060 04:08:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:42.060 04:08:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:42.060 04:08:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:42.060 04:08:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:42.060 04:08:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:42.060 04:08:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:42.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:42.060 00:12:42.060 --- 10.0.0.2 ping statistics --- 00:12:42.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.060 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:42.060 04:08:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:42.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:42.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:42.060 00:12:42.060 --- 10.0.0.3 ping statistics --- 00:12:42.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.060 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:42.060 04:08:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:42.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:42.060 00:12:42.060 --- 10.0.0.1 ping statistics --- 00:12:42.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.060 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:42.060 04:08:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.060 04:08:43 -- nvmf/common.sh@421 -- # return 0 00:12:42.060 04:08:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:42.060 04:08:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.060 04:08:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:42.060 04:08:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:42.060 04:08:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.060 04:08:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:42.060 04:08:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:42.060 04:08:43 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:42.060 04:08:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:42.060 04:08:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.060 04:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:42.060 04:08:43 -- nvmf/common.sh@469 -- # nvmfpid=78619 00:12:42.060 04:08:43 -- nvmf/common.sh@470 -- # waitforlisten 78619 00:12:42.060 04:08:43 -- common/autotest_common.sh@829 -- # '[' -z 78619 ']' 00:12:42.060 04:08:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.060 04:08:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.060 04:08:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.060 04:08:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.060 04:08:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.060 04:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:42.060 [2024-11-26 04:08:43.750404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:42.060 [2024-11-26 04:08:43.750487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.319 [2024-11-26 04:08:43.894576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.319 [2024-11-26 04:08:43.981150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.319 [2024-11-26 04:08:43.981344] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.319 [2024-11-26 04:08:43.981362] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
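The nvmf_veth_init sequence traced above gives the virtual-machine run a two-sided topology: the initiator stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, and a bridge plus an iptables accept rule carry TCP port 4420 between them. A condensed sketch follows; interface names and addresses are taken from the trace, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted here for brevity.

ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the target end moves into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two root-namespace ends together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2   # sanity check: root namespace can reach the target side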
00:12:42.319 [2024-11-26 04:08:43.981374] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.319 [2024-11-26 04:08:43.981538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.319 [2024-11-26 04:08:43.981681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.319 [2024-11-26 04:08:43.982570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.319 [2024-11-26 04:08:43.982605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.256 04:08:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.256 04:08:44 -- common/autotest_common.sh@862 -- # return 0 00:12:43.256 04:08:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:43.256 04:08:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.256 04:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:43.256 04:08:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.256 04:08:44 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:43.256 04:08:44 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2781 00:12:43.514 [2024-11-26 04:08:45.076561] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:43.514 04:08:45 -- target/invalid.sh@40 -- # out='2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2781 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:43.514 request: 00:12:43.514 { 00:12:43.514 "method": "nvmf_create_subsystem", 00:12:43.514 "params": { 00:12:43.514 "nqn": "nqn.2016-06.io.spdk:cnode2781", 00:12:43.514 "tgt_name": "foobar" 00:12:43.514 } 00:12:43.514 } 00:12:43.514 Got JSON-RPC error response 00:12:43.514 GoRPCClient: error on JSON-RPC call' 00:12:43.514 04:08:45 -- target/invalid.sh@41 -- # [[ 2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2781 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:43.514 request: 00:12:43.514 { 00:12:43.514 "method": "nvmf_create_subsystem", 00:12:43.515 "params": { 00:12:43.515 "nqn": "nqn.2016-06.io.spdk:cnode2781", 00:12:43.515 "tgt_name": "foobar" 00:12:43.515 } 00:12:43.515 } 00:12:43.515 Got JSON-RPC error response 00:12:43.515 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:43.515 04:08:45 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:43.515 04:08:45 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12519 00:12:43.774 [2024-11-26 04:08:45.372951] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12519: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:43.774 04:08:45 -- target/invalid.sh@45 -- # out='2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12519 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:43.774 request: 00:12:43.774 { 00:12:43.774 
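The invalid.sh checks traced here all follow one negative-test pattern: issue an RPC with a deliberately bad parameter, capture the client's error output, and assert that the expected rejection text came back. A minimal sketch of that pattern is below; the flags, NQNs, and error substrings come from the trace, while the rpc.py path and the exit-on-mismatch handling are assumptions.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# A target name that does not exist must be rejected.
out=$("$rpc_py" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2781 2>&1) || true
[[ $out == *"Unable to find target"* ]] || { echo "unexpected error: $out"; exit 1; }

# A serial number carrying a control character (0x1f) must be rejected too.
bad_sn=$'SPDKISFASTANDAWESOME\037'
out=$("$rpc_py" nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode12519 2>&1) || true
[[ $out == *"Invalid SN"* ]] || { echo "unexpected error: $out"; exit 1; }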
"method": "nvmf_create_subsystem", 00:12:43.774 "params": { 00:12:43.774 "nqn": "nqn.2016-06.io.spdk:cnode12519", 00:12:43.774 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:43.774 } 00:12:43.774 } 00:12:43.774 Got JSON-RPC error response 00:12:43.774 GoRPCClient: error on JSON-RPC call' 00:12:43.774 04:08:45 -- target/invalid.sh@46 -- # [[ 2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12519 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:43.774 request: 00:12:43.774 { 00:12:43.774 "method": "nvmf_create_subsystem", 00:12:43.774 "params": { 00:12:43.774 "nqn": "nqn.2016-06.io.spdk:cnode12519", 00:12:43.774 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:43.774 } 00:12:43.774 } 00:12:43.774 Got JSON-RPC error response 00:12:43.774 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:43.774 04:08:45 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:43.774 04:08:45 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13532 00:12:44.034 [2024-11-26 04:08:45.597324] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13532: invalid model number 'SPDK_Controller' 00:12:44.034 04:08:45 -- target/invalid.sh@50 -- # out='2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13532], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:44.034 request: 00:12:44.034 { 00:12:44.034 "method": "nvmf_create_subsystem", 00:12:44.034 "params": { 00:12:44.034 "nqn": "nqn.2016-06.io.spdk:cnode13532", 00:12:44.034 "model_number": "SPDK_Controller\u001f" 00:12:44.034 } 00:12:44.034 } 00:12:44.034 Got JSON-RPC error response 00:12:44.034 GoRPCClient: error on JSON-RPC call' 00:12:44.034 04:08:45 -- target/invalid.sh@51 -- # [[ 2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13532], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:44.034 request: 00:12:44.034 { 00:12:44.034 "method": "nvmf_create_subsystem", 00:12:44.034 "params": { 00:12:44.034 "nqn": "nqn.2016-06.io.spdk:cnode13532", 00:12:44.034 "model_number": "SPDK_Controller\u001f" 00:12:44.034 } 00:12:44.034 } 00:12:44.034 Got JSON-RPC error response 00:12:44.034 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:44.034 04:08:45 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:44.034 04:08:45 -- target/invalid.sh@19 -- # local length=21 ll 00:12:44.034 04:08:45 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:44.034 04:08:45 -- target/invalid.sh@21 -- # local chars 00:12:44.034 04:08:45 -- target/invalid.sh@22 -- # local 
string 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 50 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=2 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 53 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=5 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 44 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=, 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 56 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=8 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 85 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=U 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 37 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=% 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 86 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=V 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 83 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=S 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 75 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=K 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 117 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=u 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 56 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=8 
00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 111 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=o 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 81 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=Q 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 108 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=l 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 36 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+='$' 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 124 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+='|' 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 72 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=H 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 56 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=8 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 63 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+='?' 
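The character-by-character trace above and below is invalid.sh's gen_random_s helper building a 21-character random serial number: for each position it picks a code from its chars table, converts it with printf %x, renders it with echo -e '\xNN', and appends it to string. A minimal sketch of that flow, and of the Invalid SN assertion the generated string is used for just below, is shown here; the simplified loop draws from codes 33-126 rather than the full 32-127 table, and it calls rpc.py directly where the test's rpc_cmd wrapper goes through the Go JSON-RPC client visible in the "GoRPCClient" lines above.

    # Simplified stand-in for gen_random_s as traced above (not the repo's exact helper).
    gen_random_s() {
        local length=$1 ll string=''
        for (( ll = 0; ll < length; ll++ )); do
            local code=$(( 33 + RANDOM % 94 ))   # printable ASCII, '!'..'~'
            local hex
            printf -v hex '%x' "$code"
            string+=$(echo -e "\\x${hex}")       # same printf %x / echo -e pattern as the trace
        done
        echo "$string"
    }

    # The NVMe serial-number field holds 20 characters, so a 21-character string
    # must be rejected; the test only requires the error text to mention "Invalid SN".
    sn=$(gen_random_s 21)
    out=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
          -s "$sn" nqn.2016-06.io.spdk:cnode22282 2>&1) || true
    [[ $out == *"Invalid SN"* ]]

The 41-character string assembled a little further below is presumably used the same way for the model-number variant of the check (-d, asserting Invalid MN, the model-number field holding 40 characters); the trace of the remaining characters continues below.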
00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 75 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=K 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # printf %x 76 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:44.034 04:08:45 -- target/invalid.sh@25 -- # string+=L 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.034 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.034 04:08:45 -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:12:44.034 04:08:45 -- target/invalid.sh@31 -- # echo '25,8U%VSKu8oQl$|H8?KL' 00:12:44.034 04:08:45 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '25,8U%VSKu8oQl$|H8?KL' nqn.2016-06.io.spdk:cnode22282 00:12:44.294 [2024-11-26 04:08:45.937839] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22282: invalid serial number '25,8U%VSKu8oQl$|H8?KL' 00:12:44.294 04:08:45 -- target/invalid.sh@54 -- # out='2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22282 serial_number:25,8U%VSKu8oQl$|H8?KL], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 25,8U%VSKu8oQl$|H8?KL 00:12:44.294 request: 00:12:44.294 { 00:12:44.294 "method": "nvmf_create_subsystem", 00:12:44.294 "params": { 00:12:44.294 "nqn": "nqn.2016-06.io.spdk:cnode22282", 00:12:44.294 "serial_number": "25,8U%VSKu8oQl$|H8?KL" 00:12:44.294 } 00:12:44.294 } 00:12:44.294 Got JSON-RPC error response 00:12:44.294 GoRPCClient: error on JSON-RPC call' 00:12:44.294 04:08:45 -- target/invalid.sh@55 -- # [[ 2024/11/26 04:08:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22282 serial_number:25,8U%VSKu8oQl$|H8?KL], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 25,8U%VSKu8oQl$|H8?KL 00:12:44.294 request: 00:12:44.294 { 00:12:44.294 "method": "nvmf_create_subsystem", 00:12:44.294 "params": { 00:12:44.294 "nqn": "nqn.2016-06.io.spdk:cnode22282", 00:12:44.294 "serial_number": "25,8U%VSKu8oQl$|H8?KL" 00:12:44.294 } 00:12:44.294 } 00:12:44.294 Got JSON-RPC error response 00:12:44.294 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:44.294 04:08:45 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:44.294 04:08:45 -- target/invalid.sh@19 -- # local length=41 ll 00:12:44.294 04:08:45 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:44.294 04:08:45 -- target/invalid.sh@21 -- # local chars 00:12:44.294 04:08:45 -- target/invalid.sh@22 -- # local string 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:44.294 04:08:45 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # printf %x 41 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # string+=')' 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # printf %x 39 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # string+=\' 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # printf %x 74 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # string+=J 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # printf %x 69 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # string+=E 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # printf %x 99 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # string+=c 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # printf %x 58 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:44.294 04:08:45 -- target/invalid.sh@25 -- # string+=: 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 107 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=k 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 76 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=L 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 58 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=: 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 60 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+='<' 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 110 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=n 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 104 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=h 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 79 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=O 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 63 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+='?' 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 124 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+='|' 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 109 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=m 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # printf %x 54 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:44.294 04:08:46 -- target/invalid.sh@25 -- # string+=6 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.294 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 73 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=I 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 59 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=';' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 94 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+='^' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 116 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=t 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 111 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=o 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 115 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=s 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 75 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=K 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 64 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=@ 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 58 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=: 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 64 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=@ 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 42 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+='*' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 100 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=d 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 40 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+='(' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 74 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=J 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 94 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+='^' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 91 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+='[' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 56 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=8 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 39 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=\' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # printf %x 59 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:44.554 04:08:46 -- target/invalid.sh@25 -- # string+=';' 00:12:44.554 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # printf %x 86 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # string+=V 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # printf %x 33 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # string+='!' 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # printf %x 94 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # string+='^' 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # printf %x 91 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # string+='[' 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # printf %x 67 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:44.555 04:08:46 -- target/invalid.sh@25 -- # string+=C 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.555 04:08:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.555 04:08:46 -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:12:44.555 04:08:46 -- target/invalid.sh@31 -- # echo ')'\''JEc:kL: /dev/null' 00:12:47.411 04:08:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.720 04:08:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:47.720 ************************************ 00:12:47.720 END TEST nvmf_invalid 00:12:47.720 ************************************ 00:12:47.720 00:12:47.720 real 0m6.057s 00:12:47.720 user 0m23.991s 00:12:47.720 sys 0m1.350s 00:12:47.720 04:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:47.720 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:12:47.720 04:08:49 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:47.720 04:08:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:47.720 04:08:49 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:12:47.720 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:12:47.720 ************************************ 00:12:47.720 START TEST nvmf_abort 00:12:47.720 ************************************ 00:12:47.720 04:08:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:47.720 * Looking for test storage... 00:12:47.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.720 04:08:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:47.720 04:08:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:47.720 04:08:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:47.720 04:08:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:47.720 04:08:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:47.720 04:08:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:47.720 04:08:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:47.720 04:08:49 -- scripts/common.sh@335 -- # IFS=.-: 00:12:47.720 04:08:49 -- scripts/common.sh@335 -- # read -ra ver1 00:12:47.720 04:08:49 -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.720 04:08:49 -- scripts/common.sh@336 -- # read -ra ver2 00:12:47.720 04:08:49 -- scripts/common.sh@337 -- # local 'op=<' 00:12:47.720 04:08:49 -- scripts/common.sh@339 -- # ver1_l=2 00:12:47.720 04:08:49 -- scripts/common.sh@340 -- # ver2_l=1 00:12:47.720 04:08:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:47.720 04:08:49 -- scripts/common.sh@343 -- # case "$op" in 00:12:47.720 04:08:49 -- scripts/common.sh@344 -- # : 1 00:12:47.720 04:08:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:47.720 04:08:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.720 04:08:49 -- scripts/common.sh@364 -- # decimal 1 00:12:47.720 04:08:49 -- scripts/common.sh@352 -- # local d=1 00:12:47.720 04:08:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.720 04:08:49 -- scripts/common.sh@354 -- # echo 1 00:12:47.720 04:08:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:47.720 04:08:49 -- scripts/common.sh@365 -- # decimal 2 00:12:47.720 04:08:49 -- scripts/common.sh@352 -- # local d=2 00:12:47.720 04:08:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.720 04:08:49 -- scripts/common.sh@354 -- # echo 2 00:12:47.720 04:08:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:47.720 04:08:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:47.720 04:08:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:47.720 04:08:49 -- scripts/common.sh@367 -- # return 0 00:12:47.720 04:08:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.720 04:08:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:47.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.720 --rc genhtml_branch_coverage=1 00:12:47.720 --rc genhtml_function_coverage=1 00:12:47.720 --rc genhtml_legend=1 00:12:47.720 --rc geninfo_all_blocks=1 00:12:47.720 --rc geninfo_unexecuted_blocks=1 00:12:47.720 00:12:47.720 ' 00:12:47.720 04:08:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:47.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.720 --rc genhtml_branch_coverage=1 00:12:47.720 --rc genhtml_function_coverage=1 00:12:47.720 --rc genhtml_legend=1 00:12:47.720 --rc geninfo_all_blocks=1 00:12:47.720 --rc geninfo_unexecuted_blocks=1 00:12:47.720 00:12:47.720 ' 00:12:47.720 04:08:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:47.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.720 --rc genhtml_branch_coverage=1 00:12:47.720 --rc genhtml_function_coverage=1 00:12:47.720 --rc genhtml_legend=1 00:12:47.720 --rc geninfo_all_blocks=1 00:12:47.720 --rc geninfo_unexecuted_blocks=1 00:12:47.720 00:12:47.720 ' 00:12:47.720 04:08:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:47.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.720 --rc genhtml_branch_coverage=1 00:12:47.720 --rc genhtml_function_coverage=1 00:12:47.720 --rc genhtml_legend=1 00:12:47.720 --rc geninfo_all_blocks=1 00:12:47.720 --rc geninfo_unexecuted_blocks=1 00:12:47.720 00:12:47.720 ' 00:12:47.720 04:08:49 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.720 04:08:49 -- nvmf/common.sh@7 -- # uname -s 00:12:47.720 04:08:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.720 04:08:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.720 04:08:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.721 04:08:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.721 04:08:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.721 04:08:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.721 04:08:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.721 04:08:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.721 04:08:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.721 04:08:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.721 04:08:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:47.721 
04:08:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:47.721 04:08:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.721 04:08:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.721 04:08:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.721 04:08:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.721 04:08:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.721 04:08:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.721 04:08:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.721 04:08:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.721 04:08:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.721 04:08:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.721 04:08:49 -- paths/export.sh@5 -- # export PATH 00:12:47.721 04:08:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.721 04:08:49 -- nvmf/common.sh@46 -- # : 0 00:12:47.721 04:08:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:47.721 04:08:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:47.721 04:08:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:47.721 04:08:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.721 04:08:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.721 04:08:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
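The host identity that nvmf/common.sh prepares above comes from nvme-cli: nvme gen-hostnqn produces a UUID-based host NQN, and the host ID recorded next to it is the UUID portion of that NQN. A minimal sketch of the derivation follows; the parameter expansion used to split off the UUID is an assumption (the trace only shows the resulting values), and the nvmf/common.sh lines continue below.

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation: keep only the UUID part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # NVME_HOST carries the options that $NVME_CONNECT ('nvme connect', defined above)
    # would take when a test attaches the kernel initiator to the target.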
00:12:47.721 04:08:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:47.721 04:08:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:47.721 04:08:49 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.721 04:08:49 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:47.721 04:08:49 -- target/abort.sh@14 -- # nvmftestinit 00:12:47.721 04:08:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:47.721 04:08:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.721 04:08:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:47.721 04:08:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:47.721 04:08:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:47.721 04:08:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.721 04:08:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.721 04:08:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.721 04:08:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:47.721 04:08:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:47.721 04:08:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:47.721 04:08:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:47.721 04:08:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:47.721 04:08:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:47.721 04:08:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.721 04:08:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.721 04:08:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:47.721 04:08:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:47.721 04:08:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:47.721 04:08:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:47.721 04:08:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:47.721 04:08:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.721 04:08:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:47.721 04:08:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:47.721 04:08:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:47.721 04:08:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:47.721 04:08:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:47.980 04:08:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:47.980 Cannot find device "nvmf_tgt_br" 00:12:47.980 04:08:49 -- nvmf/common.sh@154 -- # true 00:12:47.980 04:08:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.980 Cannot find device "nvmf_tgt_br2" 00:12:47.980 04:08:49 -- nvmf/common.sh@155 -- # true 00:12:47.980 04:08:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:47.980 04:08:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:47.980 Cannot find device "nvmf_tgt_br" 00:12:47.980 04:08:49 -- nvmf/common.sh@157 -- # true 00:12:47.980 04:08:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:47.980 Cannot find device "nvmf_tgt_br2" 00:12:47.980 04:08:49 -- nvmf/common.sh@158 -- # true 00:12:47.980 04:08:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:47.980 04:08:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:47.980 04:08:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.980 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:47.980 04:08:49 -- nvmf/common.sh@161 -- # true 00:12:47.980 04:08:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.980 04:08:49 -- nvmf/common.sh@162 -- # true 00:12:47.980 04:08:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:47.980 04:08:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:47.980 04:08:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:47.980 04:08:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:47.980 04:08:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:47.980 04:08:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:47.980 04:08:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:47.980 04:08:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:47.980 04:08:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:47.980 04:08:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:47.980 04:08:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:47.980 04:08:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:47.980 04:08:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:47.980 04:08:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:47.980 04:08:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:47.980 04:08:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:47.980 04:08:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:47.980 04:08:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:47.980 04:08:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:47.980 04:08:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:47.980 04:08:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:47.980 04:08:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:47.980 04:08:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:47.980 04:08:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:47.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:12:47.980 00:12:47.980 --- 10.0.0.2 ping statistics --- 00:12:47.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.980 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:47.980 04:08:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:47.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:47.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:12:47.980 00:12:47.980 --- 10.0.0.3 ping statistics --- 00:12:47.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.980 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:47.980 04:08:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:47.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:47.980 00:12:47.980 --- 10.0.0.1 ping statistics --- 00:12:47.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.980 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:47.980 04:08:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.980 04:08:49 -- nvmf/common.sh@421 -- # return 0 00:12:47.980 04:08:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:47.980 04:08:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.980 04:08:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:47.980 04:08:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:47.980 04:08:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.980 04:08:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:47.980 04:08:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:47.980 04:08:49 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:47.980 04:08:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:47.980 04:08:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.980 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:12:48.238 04:08:49 -- nvmf/common.sh@469 -- # nvmfpid=79127 00:12:48.238 04:08:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:48.238 04:08:49 -- nvmf/common.sh@470 -- # waitforlisten 79127 00:12:48.239 04:08:49 -- common/autotest_common.sh@829 -- # '[' -z 79127 ']' 00:12:48.239 04:08:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.239 04:08:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.239 04:08:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.239 04:08:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.239 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:12:48.239 [2024-11-26 04:08:49.799928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:48.239 [2024-11-26 04:08:49.800169] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.239 [2024-11-26 04:08:49.937959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:48.239 [2024-11-26 04:08:49.998529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:48.239 [2024-11-26 04:08:49.998950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.239 [2024-11-26 04:08:49.998970] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.239 [2024-11-26 04:08:49.998980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
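Condensed, the nvmf_veth_init sequence traced above wires the target into its own network namespace: the initiator side keeps 10.0.0.1/24 on nvmf_init_if, the namespace gets 10.0.0.2/24 and 10.0.0.3/24 on its veth ends, every bridge-side peer is enslaved to nvmf_br, TCP/4420 is opened in iptables, and connectivity is verified with the pings above. A minimal reproduction of the same wiring, kept to the first target interface for brevity, could look like the sketch below (the target startup notices then continue in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                    # host -> namespace, as in the trace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # namespace -> host

Because both host-side veth peers sit on the same bridge, the host at 10.0.0.1 and the target interfaces inside the namespace share one L2 segment, which is what later lets the TCP listener on 10.0.0.2:4420 be reached from the host.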
00:12:48.497 [2024-11-26 04:08:49.999134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.497 [2024-11-26 04:08:49.999373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.497 [2024-11-26 04:08:49.999388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.066 04:08:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.066 04:08:50 -- common/autotest_common.sh@862 -- # return 0 00:12:49.066 04:08:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:49.066 04:08:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.066 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 04:08:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.325 04:08:50 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 [2024-11-26 04:08:50.870032] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 Malloc0 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 Delay0 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 [2024-11-26 04:08:50.948102] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:49.325 04:08:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.325 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:12:49.325 04:08:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.325 04:08:50 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:49.584 [2024-11-26 04:08:51.124194] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:51.489 Initializing NVMe Controllers 00:12:51.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:51.489 controller IO queue size 128 less than required 00:12:51.489 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:51.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:51.489 Initialization complete. Launching workers. 00:12:51.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 40513 00:12:51.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 40574, failed to submit 62 00:12:51.489 success 40513, unsuccess 61, failed 0 00:12:51.489 04:08:53 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:51.489 04:08:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.489 04:08:53 -- common/autotest_common.sh@10 -- # set +x 00:12:51.489 04:08:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.489 04:08:53 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:51.489 04:08:53 -- target/abort.sh@38 -- # nvmftestfini 00:12:51.489 04:08:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.489 04:08:53 -- nvmf/common.sh@116 -- # sync 00:12:51.489 04:08:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.489 04:08:53 -- nvmf/common.sh@119 -- # set +e 00:12:51.489 04:08:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.489 04:08:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.489 rmmod nvme_tcp 00:12:51.489 rmmod nvme_fabrics 00:12:51.748 rmmod nvme_keyring 00:12:51.748 04:08:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.748 04:08:53 -- nvmf/common.sh@123 -- # set -e 00:12:51.748 04:08:53 -- nvmf/common.sh@124 -- # return 0 00:12:51.748 04:08:53 -- nvmf/common.sh@477 -- # '[' -n 79127 ']' 00:12:51.748 04:08:53 -- nvmf/common.sh@478 -- # killprocess 79127 00:12:51.748 04:08:53 -- common/autotest_common.sh@936 -- # '[' -z 79127 ']' 00:12:51.748 04:08:53 -- common/autotest_common.sh@940 -- # kill -0 79127 00:12:51.748 04:08:53 -- common/autotest_common.sh@941 -- # uname 00:12:51.748 04:08:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.748 04:08:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79127 00:12:51.748 killing process with pid 79127 00:12:51.748 04:08:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:51.748 04:08:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:51.748 04:08:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79127' 00:12:51.748 04:08:53 -- common/autotest_common.sh@955 -- # kill 79127 00:12:51.748 04:08:53 -- common/autotest_common.sh@960 -- # wait 79127 00:12:52.007 04:08:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:52.007 04:08:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:52.007 04:08:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:52.007 04:08:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.007 04:08:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:52.007 04:08:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.007 
04:08:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.007 04:08:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.007 04:08:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:52.007 00:12:52.007 real 0m4.323s 00:12:52.007 user 0m12.602s 00:12:52.007 sys 0m1.032s 00:12:52.007 ************************************ 00:12:52.007 END TEST nvmf_abort 00:12:52.007 ************************************ 00:12:52.007 04:08:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.007 04:08:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.007 04:08:53 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:52.007 04:08:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.007 04:08:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.007 04:08:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.007 ************************************ 00:12:52.007 START TEST nvmf_ns_hotplug_stress 00:12:52.007 ************************************ 00:12:52.007 04:08:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:52.007 * Looking for test storage... 00:12:52.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.007 04:08:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:52.007 04:08:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:52.007 04:08:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:52.007 04:08:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:52.007 04:08:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:52.007 04:08:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:52.007 04:08:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:52.007 04:08:53 -- scripts/common.sh@335 -- # IFS=.-: 00:12:52.007 04:08:53 -- scripts/common.sh@335 -- # read -ra ver1 00:12:52.007 04:08:53 -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.007 04:08:53 -- scripts/common.sh@336 -- # read -ra ver2 00:12:52.007 04:08:53 -- scripts/common.sh@337 -- # local 'op=<' 00:12:52.007 04:08:53 -- scripts/common.sh@339 -- # ver1_l=2 00:12:52.007 04:08:53 -- scripts/common.sh@340 -- # ver2_l=1 00:12:52.007 04:08:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:52.007 04:08:53 -- scripts/common.sh@343 -- # case "$op" in 00:12:52.007 04:08:53 -- scripts/common.sh@344 -- # : 1 00:12:52.007 04:08:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:52.007 04:08:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.007 04:08:53 -- scripts/common.sh@364 -- # decimal 1 00:12:52.266 04:08:53 -- scripts/common.sh@352 -- # local d=1 00:12:52.266 04:08:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.266 04:08:53 -- scripts/common.sh@354 -- # echo 1 00:12:52.266 04:08:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:52.266 04:08:53 -- scripts/common.sh@365 -- # decimal 2 00:12:52.266 04:08:53 -- scripts/common.sh@352 -- # local d=2 00:12:52.266 04:08:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.266 04:08:53 -- scripts/common.sh@354 -- # echo 2 00:12:52.266 04:08:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:52.266 04:08:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:52.266 04:08:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:52.266 04:08:53 -- scripts/common.sh@367 -- # return 0 00:12:52.266 04:08:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.266 04:08:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:52.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.266 --rc genhtml_branch_coverage=1 00:12:52.266 --rc genhtml_function_coverage=1 00:12:52.266 --rc genhtml_legend=1 00:12:52.266 --rc geninfo_all_blocks=1 00:12:52.266 --rc geninfo_unexecuted_blocks=1 00:12:52.266 00:12:52.266 ' 00:12:52.266 04:08:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:52.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.266 --rc genhtml_branch_coverage=1 00:12:52.266 --rc genhtml_function_coverage=1 00:12:52.266 --rc genhtml_legend=1 00:12:52.266 --rc geninfo_all_blocks=1 00:12:52.266 --rc geninfo_unexecuted_blocks=1 00:12:52.266 00:12:52.266 ' 00:12:52.267 04:08:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:52.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.267 --rc genhtml_branch_coverage=1 00:12:52.267 --rc genhtml_function_coverage=1 00:12:52.267 --rc genhtml_legend=1 00:12:52.267 --rc geninfo_all_blocks=1 00:12:52.267 --rc geninfo_unexecuted_blocks=1 00:12:52.267 00:12:52.267 ' 00:12:52.267 04:08:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:52.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.267 --rc genhtml_branch_coverage=1 00:12:52.267 --rc genhtml_function_coverage=1 00:12:52.267 --rc genhtml_legend=1 00:12:52.267 --rc geninfo_all_blocks=1 00:12:52.267 --rc geninfo_unexecuted_blocks=1 00:12:52.267 00:12:52.267 ' 00:12:52.267 04:08:53 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.267 04:08:53 -- nvmf/common.sh@7 -- # uname -s 00:12:52.267 04:08:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.267 04:08:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.267 04:08:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.267 04:08:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.267 04:08:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.267 04:08:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.267 04:08:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.267 04:08:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.267 04:08:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.267 04:08:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.267 04:08:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
00:12:52.267 04:08:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:12:52.267 04:08:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.267 04:08:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.267 04:08:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.267 04:08:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.267 04:08:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.267 04:08:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.267 04:08:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.267 04:08:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.267 04:08:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.267 04:08:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.267 04:08:53 -- paths/export.sh@5 -- # export PATH 00:12:52.267 04:08:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.267 04:08:53 -- nvmf/common.sh@46 -- # : 0 00:12:52.267 04:08:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.267 04:08:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.267 04:08:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.267 04:08:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.267 04:08:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.267 04:08:53 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:52.267 04:08:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.267 04:08:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.267 04:08:53 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.267 04:08:53 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:52.267 04:08:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.267 04:08:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.267 04:08:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.267 04:08:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.267 04:08:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.267 04:08:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.267 04:08:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.267 04:08:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.267 04:08:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:52.267 04:08:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:52.267 04:08:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:52.267 04:08:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:52.267 04:08:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:52.267 04:08:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:52.267 04:08:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.267 04:08:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.267 04:08:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:52.267 04:08:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:52.267 04:08:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.267 04:08:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.267 04:08:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.267 04:08:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.267 04:08:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.267 04:08:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.267 04:08:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.267 04:08:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.267 04:08:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:52.267 04:08:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:52.267 Cannot find device "nvmf_tgt_br" 00:12:52.267 04:08:53 -- nvmf/common.sh@154 -- # true 00:12:52.267 04:08:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.267 Cannot find device "nvmf_tgt_br2" 00:12:52.267 04:08:53 -- nvmf/common.sh@155 -- # true 00:12:52.267 04:08:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:52.267 04:08:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:52.267 Cannot find device "nvmf_tgt_br" 00:12:52.267 04:08:53 -- nvmf/common.sh@157 -- # true 00:12:52.267 04:08:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:52.267 Cannot find device "nvmf_tgt_br2" 00:12:52.267 04:08:53 -- nvmf/common.sh@158 -- # true 00:12:52.267 04:08:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:52.267 04:08:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:52.267 04:08:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.267 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:52.267 04:08:53 -- nvmf/common.sh@161 -- # true 00:12:52.267 04:08:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.267 04:08:53 -- nvmf/common.sh@162 -- # true 00:12:52.267 04:08:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.267 04:08:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.267 04:08:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.267 04:08:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.267 04:08:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.267 04:08:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.267 04:08:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.267 04:08:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.267 04:08:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.267 04:08:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:52.267 04:08:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:52.267 04:08:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:52.267 04:08:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:52.267 04:08:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.527 04:08:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.527 04:08:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.527 04:08:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:52.527 04:08:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:52.527 04:08:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.527 04:08:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.527 04:08:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.527 04:08:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.527 04:08:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.527 04:08:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:52.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:12:52.527 00:12:52.527 --- 10.0.0.2 ping statistics --- 00:12:52.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.527 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:52.527 04:08:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:52.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:52.527 00:12:52.527 --- 10.0.0.3 ping statistics --- 00:12:52.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.527 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:52.527 04:08:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:52.527 00:12:52.527 --- 10.0.0.1 ping statistics --- 00:12:52.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.527 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:52.527 04:08:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.527 04:08:54 -- nvmf/common.sh@421 -- # return 0 00:12:52.527 04:08:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.527 04:08:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.527 04:08:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.527 04:08:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.527 04:08:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.527 04:08:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.527 04:08:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.527 04:08:54 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:52.527 04:08:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.527 04:08:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.527 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:52.527 04:08:54 -- nvmf/common.sh@469 -- # nvmfpid=79400 00:12:52.527 04:08:54 -- nvmf/common.sh@470 -- # waitforlisten 79400 00:12:52.527 04:08:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:52.527 04:08:54 -- common/autotest_common.sh@829 -- # '[' -z 79400 ']' 00:12:52.527 04:08:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.527 04:08:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.527 04:08:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.527 04:08:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.527 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:12:52.527 [2024-11-26 04:08:54.219100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:52.527 [2024-11-26 04:08:54.219182] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.786 [2024-11-26 04:08:54.354373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.786 [2024-11-26 04:08:54.417493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.786 [2024-11-26 04:08:54.417621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.786 [2024-11-26 04:08:54.417632] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.786 [2024-11-26 04:08:54.417640] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
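The veth/namespace plumbing traced above (nvmf_veth_init) boils down to the topology sketched below. This is a minimal sketch assembled from the commands in the trace, not the full helper: the interface, namespace, and address names are copied verbatim from the log, and the second target interface (10.0.0.3 on nvmf_tgt_if2) is omitted for brevity.

    # Minimal sketch of the test network built by nvmf_veth_init (names/addresses taken from the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the two host-side veth ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target reachability check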
00:12:52.786 [2024-11-26 04:08:54.417790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.786 [2024-11-26 04:08:54.417855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.786 [2024-11-26 04:08:54.417856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.723 04:08:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.723 04:08:55 -- common/autotest_common.sh@862 -- # return 0 00:12:53.723 04:08:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.723 04:08:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.723 04:08:55 -- common/autotest_common.sh@10 -- # set +x 00:12:53.723 04:08:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.723 04:08:55 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:53.723 04:08:55 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:53.981 [2024-11-26 04:08:55.561441] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.981 04:08:55 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:54.240 04:08:55 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.240 [2024-11-26 04:08:55.975800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.240 04:08:55 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:54.808 04:08:56 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:54.808 Malloc0 00:12:54.808 04:08:56 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.067 Delay0 00:12:55.067 04:08:56 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.326 04:08:56 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:55.584 NULL1 00:12:55.584 04:08:57 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:55.843 04:08:57 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:55.843 04:08:57 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79531 00:12:55.843 04:08:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:12:55.843 04:08:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.220 Read completed with error (sct=0, sc=11) 00:12:57.220 04:08:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.220 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.220 04:08:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:57.220 04:08:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:57.479 true 00:12:57.479 04:08:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:12:57.479 04:08:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.412 04:08:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.412 04:09:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:58.412 04:09:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:58.670 true 00:12:58.670 04:09:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:12:58.670 04:09:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.928 04:09:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.188 04:09:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:59.188 04:09:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:59.446 true 00:12:59.446 04:09:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:12:59.446 04:09:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.382 04:09:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.382 04:09:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:00.382 04:09:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:00.640 true 00:13:00.640 04:09:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:00.640 04:09:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.898 04:09:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.156 04:09:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:01.156 04:09:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:01.156 true 00:13:01.435 04:09:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:01.435 04:09:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.369 04:09:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
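The repeating add_ns / bdev_null_resize / kill -0 / remove_ns pattern that starts here is the core hotplug loop of ns_hotplug_stress.sh: while spdk_nvme_perf (PID 79531 above) keeps random reads running against the subsystem, the script grows the NULL1 bdev and hot-plugs namespace 1 on every pass. A condensed sketch of one pass, with the rpc.py path and NQN copied from the trace; the loop structure is simplified here for readability, so treat it as an outline rather than the exact script text.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # Keep hot-plugging while the perf workload (PID in $PERF_PID) is still alive.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0        # re-attach Delay0 as a namespace
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"        # grow NULL1 while I/O is in flight
        $rpc nvmf_subsystem_remove_ns "$nqn" 1          # hot-remove namespace 1 again
    done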
00:13:02.369 04:09:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:02.369 04:09:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:03.002 true 00:13:03.002 04:09:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:03.002 04:09:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.002 04:09:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.266 04:09:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:03.266 04:09:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:03.529 true 00:13:03.529 04:09:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:03.529 04:09:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.464 04:09:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.464 04:09:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:04.464 04:09:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:04.723 true 00:13:04.723 04:09:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:04.723 04:09:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.982 04:09:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.241 04:09:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:05.241 04:09:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:05.500 true 00:13:05.500 04:09:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:05.500 04:09:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.436 04:09:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.436 04:09:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:06.436 04:09:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:06.694 true 00:13:06.694 04:09:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:06.694 04:09:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.953 04:09:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.212 04:09:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:07.212 04:09:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:07.471 true 00:13:07.471 04:09:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:07.471 04:09:09 -- target/ns_hotplug_stress.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.407 04:09:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.407 04:09:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:08.407 04:09:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:08.666 true 00:13:08.666 04:09:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:08.666 04:09:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.925 04:09:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.185 04:09:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:09.185 04:09:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:09.444 true 00:13:09.444 04:09:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:09.444 04:09:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.380 04:09:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.639 04:09:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:10.639 04:09:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:10.898 true 00:13:10.898 04:09:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:10.898 04:09:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.898 04:09:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.157 04:09:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:11.157 04:09:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:11.415 true 00:13:11.415 04:09:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:11.415 04:09:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.353 04:09:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.612 04:09:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:12.612 04:09:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:12.870 true 00:13:12.870 04:09:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:12.871 04:09:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.129 04:09:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.388 04:09:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:13.388 04:09:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:13:13.646 true 00:13:13.646 04:09:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:13.646 04:09:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.905 04:09:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.164 04:09:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:14.164 04:09:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:14.164 true 00:13:14.164 04:09:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:14.164 04:09:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.540 04:09:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.540 04:09:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:15.540 04:09:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:15.798 true 00:13:15.798 04:09:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:15.798 04:09:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.734 04:09:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.734 04:09:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:16.734 04:09:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:16.992 true 00:13:16.992 04:09:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:16.992 04:09:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.251 04:09:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.510 04:09:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:17.511 04:09:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:17.768 true 00:13:17.768 04:09:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:17.768 04:09:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.704 04:09:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.963 04:09:20 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:18.963 04:09:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:18.963 true 00:13:19.222 04:09:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:19.222 04:09:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.481 04:09:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.481 04:09:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:19.481 04:09:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:19.740 true 00:13:19.740 04:09:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:19.740 04:09:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.676 04:09:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.935 04:09:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:20.935 04:09:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:21.194 true 00:13:21.194 04:09:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:21.194 04:09:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.194 04:09:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.454 04:09:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:21.454 04:09:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:21.713 true 00:13:21.713 04:09:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:21.713 04:09:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.651 04:09:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.910 04:09:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:22.910 04:09:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:23.169 true 00:13:23.169 04:09:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:23.169 04:09:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.428 04:09:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.688 04:09:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:23.688 04:09:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:23.946 true 00:13:23.946 04:09:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:23.946 04:09:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
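Apart from spdk_nvme_perf, the listener created earlier (10.0.0.2 port 4420, nqn.2016-06.io.spdk:cnode1) can also be reached with the kernel initiator, using the NVME_CONNECT/NVME_HOSTID values set at the top of this trace; nvme-tcp was already modprobed above. The hostnqn below is an assumed uuid-style NQN built from that host ID, not a value printed in the log.

    # Hedged example: attach the kernel NVMe/TCP initiator to the test subsystem.
    # The hostnqn is an assumption (uuid-form NQN derived from NVME_HOSTID above).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
        --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157
    nvme list                                    # the attached namespaces appear as /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1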
00:13:24.884 04:09:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.884 04:09:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:24.884 04:09:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:25.143 true 00:13:25.143 04:09:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:25.143 04:09:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.402 04:09:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.661 04:09:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:25.661 04:09:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:25.920 true 00:13:25.920 04:09:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:25.920 04:09:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.858 Initializing NVMe Controllers 00:13:26.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.858 Controller IO queue size 128, less than required. 00:13:26.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:26.858 Controller IO queue size 128, less than required. 00:13:26.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:26.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:26.858 Initialization complete. Launching workers. 
00:13:26.858 ======================================================== 00:13:26.858 Latency(us) 00:13:26.858 Device Information : IOPS MiB/s Average min max 00:13:26.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 602.03 0.29 122614.58 2957.24 1081060.26 00:13:26.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14649.19 7.15 8737.74 2271.34 555973.40 00:13:26.858 ======================================================== 00:13:26.858 Total : 15251.22 7.45 13232.92 2271.34 1081060.26 00:13:26.858 00:13:26.858 04:09:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.858 04:09:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:26.858 04:09:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:27.118 true 00:13:27.118 04:09:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79531 00:13:27.118 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79531) - No such process 00:13:27.118 04:09:28 -- target/ns_hotplug_stress.sh@53 -- # wait 79531 00:13:27.118 04:09:28 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.377 04:09:28 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:27.635 null0 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.635 04:09:29 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:27.893 null1 00:13:27.893 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:27.893 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.893 04:09:29 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:28.152 null2 00:13:28.152 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.152 04:09:29 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.152 04:09:29 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:28.411 null3 00:13:28.411 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.411 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.411 04:09:30 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:28.670 null4 00:13:28.670 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.670 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.670 04:09:30 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:28.929 null5 00:13:28.929 04:09:30 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.929 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.929 04:09:30 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:28.929 null6 00:13:28.929 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.929 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.929 04:09:30 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:29.188 null7 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
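From this point the interleaved xtrace comes from eight background workers, one per freshly created null bdev (null0 .. null7); each worker repeatedly attaches its bdev under a fixed namespace ID and detaches it again, which is why the add/remove lines below alternate across all eight NSIDs at once. A condensed sketch of that phase, reconstructed from the trace (iteration count and bdev geometry read from the @16/@60 lines above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                # hot-add/hot-remove one namespace in a loop
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < 8; i++)); do
        $rpc bdev_null_create "null$i" 100 4096   # 100 MiB null bdev with 4 KiB blocks
        add_remove "$((i + 1))" "null$i" &        # one worker per namespace ID
        pids+=($!)
    done
    wait "${pids[@]}"                             # every worker must exit cleanly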
00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.188 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
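Not part of the trace, but while those workers run, the namespace churn can be watched from a second shell by polling the target over RPC; nvmf_get_subsystems reports the namespaces currently attached to each subsystem.

    # Hedged observation helper (not in the trace): sample how many namespaces are attached, once per second.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in 1 2 3 4 5; do
        $rpc nvmf_get_subsystems | grep -c '"nsid"'   # one matching line per attached namespace
        sleep 1
    done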
00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@66 -- # wait 80586 80587 80590 80591 80593 80595 80598 80599 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.189 04:09:30 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:29.448 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.448 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:29.448 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:29.448 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:29.448 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.707 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:29.967 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.228 04:09:31 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:30.513 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.817 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:31.075 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.075 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.075 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:31.075 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.076 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.333 
04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.333 04:09:32 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:31.333 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.592 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.592 04:09:33 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.851 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.110 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.111 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.370 04:09:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.370 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.628 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.887 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.146 
04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.146 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.406 04:09:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.406 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.665 04:09:35 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.665 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.924 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.182 04:09:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.441 04:09:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:34.441 04:09:36 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:34.441 04:09:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:34.441 04:09:36 -- nvmf/common.sh@116 -- # sync 00:13:34.700 04:09:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:34.700 04:09:36 -- nvmf/common.sh@119 -- # set +e 00:13:34.700 04:09:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:34.700 04:09:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:34.700 rmmod nvme_tcp 00:13:34.700 rmmod nvme_fabrics 00:13:34.700 rmmod nvme_keyring 00:13:34.700 04:09:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:34.700 04:09:36 -- nvmf/common.sh@123 -- # set -e 00:13:34.700 04:09:36 -- nvmf/common.sh@124 -- # return 0 00:13:34.700 04:09:36 -- nvmf/common.sh@477 -- # '[' -n 79400 ']' 00:13:34.700 04:09:36 -- nvmf/common.sh@478 -- # killprocess 79400 00:13:34.700 04:09:36 -- common/autotest_common.sh@936 -- # '[' -z 79400 ']' 00:13:34.700 04:09:36 -- common/autotest_common.sh@940 -- # kill -0 79400 00:13:34.700 04:09:36 -- common/autotest_common.sh@941 -- # uname 00:13:34.700 04:09:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:34.700 04:09:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79400 00:13:34.700 killing process with pid 79400 00:13:34.700 04:09:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:34.700 04:09:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:34.700 04:09:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79400' 00:13:34.700 04:09:36 -- common/autotest_common.sh@955 -- # kill 79400 00:13:34.700 04:09:36 -- common/autotest_common.sh@960 -- # wait 79400 00:13:34.959 
04:09:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:34.959 04:09:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:34.959 04:09:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:34.959 04:09:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.959 04:09:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:34.959 04:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.959 04:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.959 04:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.959 04:09:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:34.959 00:13:34.959 real 0m42.899s 00:13:34.959 user 3m23.304s 00:13:34.959 sys 0m11.836s 00:13:34.959 04:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:34.959 04:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:34.959 ************************************ 00:13:34.959 END TEST nvmf_ns_hotplug_stress 00:13:34.959 ************************************ 00:13:34.959 04:09:36 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.959 04:09:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:34.959 04:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.959 04:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:34.959 ************************************ 00:13:34.959 START TEST nvmf_connect_stress 00:13:34.959 ************************************ 00:13:34.959 04:09:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.959 * Looking for test storage... 00:13:34.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.959 04:09:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:34.959 04:09:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:34.959 04:09:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:34.959 04:09:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:34.959 04:09:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:34.959 04:09:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:34.959 04:09:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:34.959 04:09:36 -- scripts/common.sh@335 -- # IFS=.-: 00:13:34.959 04:09:36 -- scripts/common.sh@335 -- # read -ra ver1 00:13:34.959 04:09:36 -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.959 04:09:36 -- scripts/common.sh@336 -- # read -ra ver2 00:13:34.959 04:09:36 -- scripts/common.sh@337 -- # local 'op=<' 00:13:34.959 04:09:36 -- scripts/common.sh@339 -- # ver1_l=2 00:13:34.959 04:09:36 -- scripts/common.sh@340 -- # ver2_l=1 00:13:34.959 04:09:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:34.959 04:09:36 -- scripts/common.sh@343 -- # case "$op" in 00:13:34.959 04:09:36 -- scripts/common.sh@344 -- # : 1 00:13:34.959 04:09:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:34.959 04:09:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.959 04:09:36 -- scripts/common.sh@364 -- # decimal 1 00:13:35.219 04:09:36 -- scripts/common.sh@352 -- # local d=1 00:13:35.219 04:09:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.219 04:09:36 -- scripts/common.sh@354 -- # echo 1 00:13:35.219 04:09:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:35.219 04:09:36 -- scripts/common.sh@365 -- # decimal 2 00:13:35.219 04:09:36 -- scripts/common.sh@352 -- # local d=2 00:13:35.219 04:09:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.219 04:09:36 -- scripts/common.sh@354 -- # echo 2 00:13:35.219 04:09:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:35.219 04:09:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:35.219 04:09:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:35.219 04:09:36 -- scripts/common.sh@367 -- # return 0 00:13:35.219 04:09:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.219 04:09:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:35.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.219 --rc genhtml_branch_coverage=1 00:13:35.219 --rc genhtml_function_coverage=1 00:13:35.219 --rc genhtml_legend=1 00:13:35.219 --rc geninfo_all_blocks=1 00:13:35.219 --rc geninfo_unexecuted_blocks=1 00:13:35.219 00:13:35.219 ' 00:13:35.219 04:09:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:35.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.219 --rc genhtml_branch_coverage=1 00:13:35.219 --rc genhtml_function_coverage=1 00:13:35.219 --rc genhtml_legend=1 00:13:35.219 --rc geninfo_all_blocks=1 00:13:35.219 --rc geninfo_unexecuted_blocks=1 00:13:35.219 00:13:35.219 ' 00:13:35.219 04:09:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:35.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.219 --rc genhtml_branch_coverage=1 00:13:35.219 --rc genhtml_function_coverage=1 00:13:35.219 --rc genhtml_legend=1 00:13:35.219 --rc geninfo_all_blocks=1 00:13:35.219 --rc geninfo_unexecuted_blocks=1 00:13:35.219 00:13:35.219 ' 00:13:35.219 04:09:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:35.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.219 --rc genhtml_branch_coverage=1 00:13:35.219 --rc genhtml_function_coverage=1 00:13:35.219 --rc genhtml_legend=1 00:13:35.219 --rc geninfo_all_blocks=1 00:13:35.219 --rc geninfo_unexecuted_blocks=1 00:13:35.219 00:13:35.219 ' 00:13:35.219 04:09:36 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.219 04:09:36 -- nvmf/common.sh@7 -- # uname -s 00:13:35.219 04:09:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.219 04:09:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.219 04:09:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.219 04:09:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.219 04:09:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.219 04:09:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.219 04:09:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.219 04:09:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.219 04:09:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.219 04:09:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.219 04:09:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
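
Stepping back to the nvmf_ns_hotplug_stress run that finishes just above: the interleaved @16-@18 trace lines show per-namespace workers looping roughly ten times each, attaching a null bdev to nqn.2016-06.io.spdk:cnode1 as a namespace and detaching it again while I/O is in flight. Below is a condensed sketch of one plausible shape of that loop; the worker structure and iteration order are inferred from the interleaved trace rather than copied from ns_hotplug_stress.sh, so treat it as illustrative only.

```bash
#!/usr/bin/env bash
# Sketch (inferred from the trace, not verbatim from ns_hotplug_stress.sh):
# each worker repeatedly attaches one null bdev as a namespace and removes it.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2 i=0
    while ((i < 10)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        ((++i))
    done
}

# null0..null7 map to namespace IDs 1..8; all eight workers run in parallel,
# which is what produces the interleaved add/remove lines in the log.
for n in {0..7}; do
    add_remove "$((n + 1))" "null$n" &
done
wait
```
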
00:13:35.219 04:09:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:13:35.219 04:09:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.219 04:09:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.219 04:09:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.219 04:09:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.219 04:09:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.219 04:09:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.219 04:09:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.219 04:09:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.219 04:09:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.219 04:09:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.219 04:09:36 -- paths/export.sh@5 -- # export PATH 00:13:35.219 04:09:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.219 04:09:36 -- nvmf/common.sh@46 -- # : 0 00:13:35.219 04:09:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:35.219 04:09:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:35.219 04:09:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:35.219 04:09:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.219 04:09:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.219 04:09:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:35.219 04:09:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:35.219 04:09:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:35.219 04:09:36 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:35.219 04:09:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:35.219 04:09:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.219 04:09:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:35.219 04:09:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:35.219 04:09:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:35.219 04:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.219 04:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.219 04:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.219 04:09:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:35.219 04:09:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:35.219 04:09:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:35.219 04:09:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:35.219 04:09:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:35.219 04:09:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:35.219 04:09:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.219 04:09:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.219 04:09:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:35.219 04:09:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:35.219 04:09:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.219 04:09:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.219 04:09:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.219 04:09:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.219 04:09:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.219 04:09:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.219 04:09:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.219 04:09:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.219 04:09:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:35.219 04:09:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:35.219 Cannot find device "nvmf_tgt_br" 00:13:35.219 04:09:36 -- nvmf/common.sh@154 -- # true 00:13:35.219 04:09:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.219 Cannot find device "nvmf_tgt_br2" 00:13:35.219 04:09:36 -- nvmf/common.sh@155 -- # true 00:13:35.219 04:09:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:35.219 04:09:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:35.219 Cannot find device "nvmf_tgt_br" 00:13:35.219 04:09:36 -- nvmf/common.sh@157 -- # true 00:13:35.219 04:09:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:35.219 Cannot find device "nvmf_tgt_br2" 00:13:35.219 04:09:36 -- nvmf/common.sh@158 -- # true 00:13:35.219 04:09:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:35.219 04:09:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:35.219 04:09:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.219 04:09:36 -- nvmf/common.sh@161 -- # true 00:13:35.219 04:09:36 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.220 04:09:36 -- nvmf/common.sh@162 -- # true 00:13:35.220 04:09:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.220 04:09:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.220 04:09:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.220 04:09:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.220 04:09:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.220 04:09:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.220 04:09:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.220 04:09:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:35.220 04:09:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:35.220 04:09:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:35.220 04:09:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:35.478 04:09:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:35.478 04:09:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:35.478 04:09:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.479 04:09:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.479 04:09:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.479 04:09:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:35.479 04:09:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:35.479 04:09:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.479 04:09:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.479 04:09:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.479 04:09:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.479 04:09:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.479 04:09:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:35.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:13:35.479 00:13:35.479 --- 10.0.0.2 ping statistics --- 00:13:35.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.479 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:35.479 04:09:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:35.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:13:35.479 00:13:35.479 --- 10.0.0.3 ping statistics --- 00:13:35.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.479 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:35.479 04:09:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:35.479 00:13:35.479 --- 10.0.0.1 ping statistics --- 00:13:35.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.479 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:35.479 04:09:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.479 04:09:37 -- nvmf/common.sh@421 -- # return 0 00:13:35.479 04:09:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:35.479 04:09:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.479 04:09:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:35.479 04:09:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:35.479 04:09:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.479 04:09:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:35.479 04:09:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:35.479 04:09:37 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:35.479 04:09:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:35.479 04:09:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.479 04:09:37 -- common/autotest_common.sh@10 -- # set +x 00:13:35.479 04:09:37 -- nvmf/common.sh@469 -- # nvmfpid=81925 00:13:35.479 04:09:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:35.479 04:09:37 -- nvmf/common.sh@470 -- # waitforlisten 81925 00:13:35.479 04:09:37 -- common/autotest_common.sh@829 -- # '[' -z 81925 ']' 00:13:35.479 04:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.479 04:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.479 04:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.479 04:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.479 04:09:37 -- common/autotest_common.sh@10 -- # set +x 00:13:35.479 [2024-11-26 04:09:37.164811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:35.479 [2024-11-26 04:09:37.164870] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.737 [2024-11-26 04:09:37.293122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.737 [2024-11-26 04:09:37.350906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.737 [2024-11-26 04:09:37.351039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.737 [2024-11-26 04:09:37.351050] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.737 [2024-11-26 04:09:37.351057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
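
The nvmf/common.sh trace above is the veth/netns plumbing the harness builds before launching nvmf_tgt: a host-side initiator interface, target interfaces moved into the nvmf_tgt_ns_spdk namespace, everything enslaved to a bridge, an iptables allowance for port 4420, and a ping in each direction to verify the path. A condensed sketch of that wiring follows, simplified to a single target interface (the real setup also creates nvmf_tgt_if2/10.0.0.3) and using the interface names from the trace:

```bash
#!/usr/bin/env bash
# Minimal sketch of the namespace wiring verified by the pings above
# (single target interface; run as root).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

for link in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                  # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
```
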
00:13:35.737 [2024-11-26 04:09:37.351229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.737 [2024-11-26 04:09:37.352051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.737 [2024-11-26 04:09:37.352096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.672 04:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.672 04:09:38 -- common/autotest_common.sh@862 -- # return 0 00:13:36.672 04:09:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.672 04:09:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.672 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.672 04:09:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.672 04:09:38 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.672 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.672 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.672 [2024-11-26 04:09:38.143391] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.672 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.672 04:09:38 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.672 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.672 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.672 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.672 04:09:38 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.672 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.672 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.672 [2024-11-26 04:09:38.160689] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.672 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.672 04:09:38 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:36.672 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.672 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.672 NULL1 00:13:36.672 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.672 04:09:38 -- target/connect_stress.sh@21 -- # PERF_PID=81977 00:13:36.672 04:09:38 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:36.672 04:09:38 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:36.672 04:09:38 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- 
target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.672 04:09:38 -- target/connect_stress.sh@28 -- # cat 00:13:36.672 04:09:38 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:36.672 04:09:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.672 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.672 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:36.931 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.931 04:09:38 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:36.931 04:09:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.931 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.931 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.190 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.190 04:09:38 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:37.190 04:09:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.190 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.190 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:37.757 04:09:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.757 04:09:39 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:37.757 04:09:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.757 04:09:39 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:37.757 04:09:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.016 04:09:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.016 04:09:39 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:38.016 04:09:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.016 04:09:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.016 04:09:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.274 04:09:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.274 04:09:39 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:38.274 04:09:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.274 04:09:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.274 04:09:39 -- common/autotest_common.sh@10 -- # set +x 00:13:38.533 04:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.533 04:09:40 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:38.533 04:09:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.533 04:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.533 04:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:38.791 04:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.791 04:09:40 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:38.791 04:09:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.791 04:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.791 04:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:39.358 04:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.358 04:09:40 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:39.358 04:09:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.358 04:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.358 04:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 04:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.616 04:09:41 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:39.616 04:09:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.616 04:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.616 04:09:41 -- common/autotest_common.sh@10 -- # set +x 00:13:39.875 04:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.875 04:09:41 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:39.875 04:09:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.875 04:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.875 04:09:41 -- common/autotest_common.sh@10 -- # set +x 00:13:40.134 04:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.134 04:09:41 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:40.134 04:09:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.134 04:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.134 04:09:41 -- common/autotest_common.sh@10 -- # set +x 00:13:40.393 04:09:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.393 04:09:42 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:40.393 04:09:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.393 04:09:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.393 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:13:40.961 04:09:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.961 04:09:42 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:40.961 04:09:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.961 04:09:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.961 
04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.220 04:09:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.220 04:09:42 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:41.220 04:09:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.220 04:09:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.220 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:13:41.478 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.478 04:09:43 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:41.478 04:09:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.478 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.478 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.737 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.737 04:09:43 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:41.737 04:09:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.737 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.737 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:41.995 04:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.995 04:09:43 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:41.995 04:09:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.995 04:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.995 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:42.562 04:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.562 04:09:44 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:42.562 04:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.562 04:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.562 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:42.821 04:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.821 04:09:44 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:42.821 04:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.821 04:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.821 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.081 04:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.081 04:09:44 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:43.081 04:09:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.081 04:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.081 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.340 04:09:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.340 04:09:45 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:43.340 04:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.340 04:09:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.340 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:43.598 04:09:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.598 04:09:45 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:43.598 04:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.598 04:09:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.598 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.164 04:09:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.164 04:09:45 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:44.164 04:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.164 04:09:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.164 04:09:45 -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.423 04:09:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.423 04:09:45 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:44.423 04:09:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.423 04:09:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.423 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.681 04:09:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.681 04:09:46 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:44.681 04:09:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.681 04:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.681 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.940 04:09:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.941 04:09:46 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:44.941 04:09:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.941 04:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.941 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.199 04:09:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.199 04:09:46 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:45.199 04:09:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.199 04:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.199 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.766 04:09:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.766 04:09:47 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:45.766 04:09:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.766 04:09:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.766 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.025 04:09:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.025 04:09:47 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:46.025 04:09:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.025 04:09:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.025 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.284 04:09:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.284 04:09:47 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:46.284 04:09:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.284 04:09:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.284 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:13:46.543 04:09:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.543 04:09:48 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:46.543 04:09:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.543 04:09:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.543 04:09:48 -- common/autotest_common.sh@10 -- # set +x 00:13:46.801 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:46.801 04:09:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.801 04:09:48 -- target/connect_stress.sh@34 -- # kill -0 81977 00:13:46.801 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81977) - No such process 00:13:46.801 04:09:48 -- target/connect_stress.sh@38 -- # wait 81977 00:13:46.801 04:09:48 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:47.060 04:09:48 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:47.060 04:09:48 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:47.060 04:09:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:47.060 04:09:48 -- nvmf/common.sh@116 -- # sync 00:13:47.060 04:09:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:47.060 04:09:48 -- nvmf/common.sh@119 -- # set +e 00:13:47.060 04:09:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:47.060 04:09:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:47.060 rmmod nvme_tcp 00:13:47.060 rmmod nvme_fabrics 00:13:47.060 rmmod nvme_keyring 00:13:47.060 04:09:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:47.060 04:09:48 -- nvmf/common.sh@123 -- # set -e 00:13:47.060 04:09:48 -- nvmf/common.sh@124 -- # return 0 00:13:47.060 04:09:48 -- nvmf/common.sh@477 -- # '[' -n 81925 ']' 00:13:47.060 04:09:48 -- nvmf/common.sh@478 -- # killprocess 81925 00:13:47.060 04:09:48 -- common/autotest_common.sh@936 -- # '[' -z 81925 ']' 00:13:47.060 04:09:48 -- common/autotest_common.sh@940 -- # kill -0 81925 00:13:47.060 04:09:48 -- common/autotest_common.sh@941 -- # uname 00:13:47.060 04:09:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:47.060 04:09:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81925 00:13:47.060 04:09:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:47.060 killing process with pid 81925 00:13:47.060 04:09:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:47.060 04:09:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81925' 00:13:47.060 04:09:48 -- common/autotest_common.sh@955 -- # kill 81925 00:13:47.060 04:09:48 -- common/autotest_common.sh@960 -- # wait 81925 00:13:47.319 04:09:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.319 04:09:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.319 04:09:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.319 04:09:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.319 04:09:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.319 04:09:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.319 04:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.319 04:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.319 04:09:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:47.319 00:13:47.319 real 0m12.351s 00:13:47.319 user 0m41.430s 00:13:47.319 sys 0m2.953s 00:13:47.319 04:09:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:47.319 04:09:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.319 ************************************ 00:13:47.319 END TEST nvmf_connect_stress 00:13:47.319 ************************************ 00:13:47.319 04:09:48 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:47.319 04:09:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:47.319 04:09:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.319 04:09:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.319 ************************************ 00:13:47.319 START TEST nvmf_fused_ordering 00:13:47.319 ************************************ 00:13:47.319 04:09:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:47.319 * Looking for test storage... 
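The connect_stress loop traced above keeps probing the stress process with kill -0 <pid> between RPC calls and only tears the target down once the probe reports "No such process", at which point the script reaps the child and removes its RPC scratch file. A minimal sketch of that liveness-polling pattern, using illustrative names (STRESS_PID, RPC_LOG) rather than the actual connect_stress.sh variables:

    # Launch a stand-in workload in the background (the real test runs a
    # multi-connection stress tool here) and poll it until it exits.
    sleep 5 &                        # stand-in for the stress workload
    STRESS_PID=$!
    RPC_LOG=/tmp/rpc.txt             # illustrative scratch file name

    while kill -0 "$STRESS_PID" 2>/dev/null; do
        # kill -0 sends no signal; it only checks that the PID still exists
        sleep 1                      # the real script issues RPCs here instead
    done

    wait "$STRESS_PID"               # reap the exit status of the finished child
    rm -f "$RPC_LOG"                 # drop the RPC scratch file, as the trace does

Once the probe fails, the script clears its traps and calls nvmftestfini, which is the module/network cleanup traced above.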
00:13:47.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:47.319 04:09:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:47.319 04:09:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:47.319 04:09:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:47.578 04:09:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:47.578 04:09:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:47.578 04:09:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:47.578 04:09:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:47.578 04:09:49 -- scripts/common.sh@335 -- # IFS=.-: 00:13:47.578 04:09:49 -- scripts/common.sh@335 -- # read -ra ver1 00:13:47.578 04:09:49 -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.578 04:09:49 -- scripts/common.sh@336 -- # read -ra ver2 00:13:47.578 04:09:49 -- scripts/common.sh@337 -- # local 'op=<' 00:13:47.578 04:09:49 -- scripts/common.sh@339 -- # ver1_l=2 00:13:47.578 04:09:49 -- scripts/common.sh@340 -- # ver2_l=1 00:13:47.578 04:09:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:47.578 04:09:49 -- scripts/common.sh@343 -- # case "$op" in 00:13:47.578 04:09:49 -- scripts/common.sh@344 -- # : 1 00:13:47.578 04:09:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:47.578 04:09:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.578 04:09:49 -- scripts/common.sh@364 -- # decimal 1 00:13:47.578 04:09:49 -- scripts/common.sh@352 -- # local d=1 00:13:47.578 04:09:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.578 04:09:49 -- scripts/common.sh@354 -- # echo 1 00:13:47.578 04:09:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:47.578 04:09:49 -- scripts/common.sh@365 -- # decimal 2 00:13:47.578 04:09:49 -- scripts/common.sh@352 -- # local d=2 00:13:47.578 04:09:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.578 04:09:49 -- scripts/common.sh@354 -- # echo 2 00:13:47.578 04:09:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:47.578 04:09:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:47.578 04:09:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:47.578 04:09:49 -- scripts/common.sh@367 -- # return 0 00:13:47.578 04:09:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.578 04:09:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:47.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.578 --rc genhtml_branch_coverage=1 00:13:47.578 --rc genhtml_function_coverage=1 00:13:47.578 --rc genhtml_legend=1 00:13:47.578 --rc geninfo_all_blocks=1 00:13:47.578 --rc geninfo_unexecuted_blocks=1 00:13:47.578 00:13:47.578 ' 00:13:47.578 04:09:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:47.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.578 --rc genhtml_branch_coverage=1 00:13:47.578 --rc genhtml_function_coverage=1 00:13:47.578 --rc genhtml_legend=1 00:13:47.578 --rc geninfo_all_blocks=1 00:13:47.578 --rc geninfo_unexecuted_blocks=1 00:13:47.578 00:13:47.578 ' 00:13:47.578 04:09:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:47.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.578 --rc genhtml_branch_coverage=1 00:13:47.578 --rc genhtml_function_coverage=1 00:13:47.578 --rc genhtml_legend=1 00:13:47.578 --rc geninfo_all_blocks=1 00:13:47.578 --rc geninfo_unexecuted_blocks=1 00:13:47.578 00:13:47.578 ' 00:13:47.578 
04:09:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:47.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.578 --rc genhtml_branch_coverage=1 00:13:47.578 --rc genhtml_function_coverage=1 00:13:47.578 --rc genhtml_legend=1 00:13:47.578 --rc geninfo_all_blocks=1 00:13:47.578 --rc geninfo_unexecuted_blocks=1 00:13:47.578 00:13:47.578 ' 00:13:47.578 04:09:49 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:47.578 04:09:49 -- nvmf/common.sh@7 -- # uname -s 00:13:47.579 04:09:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.579 04:09:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.579 04:09:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.579 04:09:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.579 04:09:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.579 04:09:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.579 04:09:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.579 04:09:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.579 04:09:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.579 04:09:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.579 04:09:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:13:47.579 04:09:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:13:47.579 04:09:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.579 04:09:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.579 04:09:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:47.579 04:09:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.579 04:09:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.579 04:09:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.579 04:09:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.579 04:09:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.579 04:09:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.579 04:09:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.579 04:09:49 -- paths/export.sh@5 -- # export PATH 00:13:47.579 04:09:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.579 04:09:49 -- nvmf/common.sh@46 -- # : 0 00:13:47.579 04:09:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:47.579 04:09:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:47.579 04:09:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:47.579 04:09:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.579 04:09:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.579 04:09:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:47.579 04:09:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:47.579 04:09:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:47.579 04:09:49 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:47.579 04:09:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:47.579 04:09:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.579 04:09:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:47.579 04:09:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:47.579 04:09:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:47.579 04:09:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.579 04:09:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.579 04:09:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.579 04:09:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:47.579 04:09:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:47.579 04:09:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:47.579 04:09:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:47.579 04:09:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:47.579 04:09:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:47.579 04:09:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.579 04:09:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.579 04:09:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:47.579 04:09:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:47.579 04:09:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:47.579 04:09:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:47.579 04:09:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:47.579 04:09:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:47.579 04:09:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:47.579 04:09:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:47.579 04:09:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:47.579 04:09:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:47.579 04:09:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:47.579 04:09:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:47.579 Cannot find device "nvmf_tgt_br" 00:13:47.579 04:09:49 -- nvmf/common.sh@154 -- # true 00:13:47.579 04:09:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.579 Cannot find device "nvmf_tgt_br2" 00:13:47.579 04:09:49 -- nvmf/common.sh@155 -- # true 00:13:47.579 04:09:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:47.579 04:09:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:47.579 Cannot find device "nvmf_tgt_br" 00:13:47.579 04:09:49 -- nvmf/common.sh@157 -- # true 00:13:47.579 04:09:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:47.579 Cannot find device "nvmf_tgt_br2" 00:13:47.579 04:09:49 -- nvmf/common.sh@158 -- # true 00:13:47.579 04:09:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:47.579 04:09:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:47.579 04:09:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:47.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.579 04:09:49 -- nvmf/common.sh@161 -- # true 00:13:47.579 04:09:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.579 04:09:49 -- nvmf/common.sh@162 -- # true 00:13:47.579 04:09:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:47.579 04:09:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:47.579 04:09:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:47.579 04:09:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:47.579 04:09:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:47.579 04:09:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.579 04:09:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.837 04:09:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:47.837 04:09:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:47.837 04:09:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:47.837 04:09:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:47.837 04:09:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:47.837 04:09:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:47.837 04:09:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.837 04:09:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.837 04:09:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.837 04:09:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:47.837 04:09:49 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:47.837 04:09:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.837 04:09:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.837 04:09:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.837 04:09:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.837 04:09:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.837 04:09:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:47.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:47.837 00:13:47.837 --- 10.0.0.2 ping statistics --- 00:13:47.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.837 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:47.837 04:09:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:47.837 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.837 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:47.837 00:13:47.837 --- 10.0.0.3 ping statistics --- 00:13:47.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.837 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:47.837 04:09:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:13:47.837 00:13:47.837 --- 10.0.0.1 ping statistics --- 00:13:47.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.837 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:13:47.837 04:09:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.837 04:09:49 -- nvmf/common.sh@421 -- # return 0 00:13:47.837 04:09:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.837 04:09:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.837 04:09:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.837 04:09:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.837 04:09:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.837 04:09:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.837 04:09:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.837 04:09:49 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:47.837 04:09:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.837 04:09:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.837 04:09:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.837 04:09:49 -- nvmf/common.sh@469 -- # nvmfpid=82306 00:13:47.837 04:09:49 -- nvmf/common.sh@470 -- # waitforlisten 82306 00:13:47.837 04:09:49 -- common/autotest_common.sh@829 -- # '[' -z 82306 ']' 00:13:47.837 04:09:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.837 04:09:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.837 04:09:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.837 04:09:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
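The nvmf_veth_init sequence traced above builds the whole test network in software: a target namespace, veth pairs whose far ends are moved into that namespace, 10.0.0.x/24 addresses, a bridge tying the host-side ends together, an iptables rule admitting TCP port 4420, and ping checks in both directions. A condensed sketch of the same bring-up, assuming iproute2 and iptables are available; it mirrors the interface names in the trace but omits the second target interface, the teardown, and the error handling that common.sh also carries:

    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"        # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    ip link add nvmf_br type bridge            # bridge joins the host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target reachability

With that topology in place, the target application is started inside the namespace (ip netns exec ... nvmf_tgt) so it listens on 10.0.0.2:4420 while the initiator side stays on the host, which is exactly what the nvmfappstart trace that follows does.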
00:13:47.837 04:09:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.837 04:09:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.837 [2024-11-26 04:09:49.542871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:47.837 [2024-11-26 04:09:49.542957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.095 [2024-11-26 04:09:49.674817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.095 [2024-11-26 04:09:49.729609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.095 [2024-11-26 04:09:49.729771] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.095 [2024-11-26 04:09:49.729786] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.095 [2024-11-26 04:09:49.729794] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.095 [2024-11-26 04:09:49.729819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.663 04:09:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.663 04:09:50 -- common/autotest_common.sh@862 -- # return 0 00:13:48.663 04:09:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:48.663 04:09:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.663 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 04:09:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.922 04:09:50 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.922 04:09:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 [2024-11-26 04:09:50.466533] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.922 04:09:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 04:09:50 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.922 04:09:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 04:09:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 04:09:50 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.922 04:09:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 [2024-11-26 04:09:50.483068] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.922 04:09:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 04:09:50 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.922 04:09:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 NULL1 00:13:48.922 04:09:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 04:09:50 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:48.922 04:09:50 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:48.922 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 04:09:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 04:09:50 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:48.922 04:09:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 04:09:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 04:09:50 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:48.922 [2024-11-26 04:09:50.533552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.922 [2024-11-26 04:09:50.533601] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82356 ] 00:13:49.489 Attached to nqn.2016-06.io.spdk:cnode1 00:13:49.489 Namespace ID: 1 size: 1GB 00:13:49.489 fused_ordering(0) 00:13:49.489 fused_ordering(1) 00:13:49.489 fused_ordering(2) 00:13:49.489 fused_ordering(3) 00:13:49.489 fused_ordering(4) 00:13:49.489 fused_ordering(5) 00:13:49.489 fused_ordering(6) 00:13:49.489 fused_ordering(7) 00:13:49.489 fused_ordering(8) 00:13:49.489 fused_ordering(9) 00:13:49.489 fused_ordering(10) 00:13:49.489 fused_ordering(11) 00:13:49.489 fused_ordering(12) 00:13:49.489 fused_ordering(13) 00:13:49.489 fused_ordering(14) 00:13:49.489 fused_ordering(15) 00:13:49.489 fused_ordering(16) 00:13:49.489 fused_ordering(17) 00:13:49.489 fused_ordering(18) 00:13:49.489 fused_ordering(19) 00:13:49.489 fused_ordering(20) 00:13:49.489 fused_ordering(21) 00:13:49.489 fused_ordering(22) 00:13:49.489 fused_ordering(23) 00:13:49.489 fused_ordering(24) 00:13:49.489 fused_ordering(25) 00:13:49.489 fused_ordering(26) 00:13:49.489 fused_ordering(27) 00:13:49.489 fused_ordering(28) 00:13:49.489 fused_ordering(29) 00:13:49.489 fused_ordering(30) 00:13:49.489 fused_ordering(31) 00:13:49.489 fused_ordering(32) 00:13:49.489 fused_ordering(33) 00:13:49.489 fused_ordering(34) 00:13:49.489 fused_ordering(35) 00:13:49.489 fused_ordering(36) 00:13:49.489 fused_ordering(37) 00:13:49.489 fused_ordering(38) 00:13:49.489 fused_ordering(39) 00:13:49.489 fused_ordering(40) 00:13:49.489 fused_ordering(41) 00:13:49.489 fused_ordering(42) 00:13:49.489 fused_ordering(43) 00:13:49.489 fused_ordering(44) 00:13:49.489 fused_ordering(45) 00:13:49.489 fused_ordering(46) 00:13:49.489 fused_ordering(47) 00:13:49.489 fused_ordering(48) 00:13:49.489 fused_ordering(49) 00:13:49.489 fused_ordering(50) 00:13:49.489 fused_ordering(51) 00:13:49.489 fused_ordering(52) 00:13:49.489 fused_ordering(53) 00:13:49.489 fused_ordering(54) 00:13:49.489 fused_ordering(55) 00:13:49.489 fused_ordering(56) 00:13:49.489 fused_ordering(57) 00:13:49.489 fused_ordering(58) 00:13:49.489 fused_ordering(59) 00:13:49.489 fused_ordering(60) 00:13:49.489 fused_ordering(61) 00:13:49.489 fused_ordering(62) 00:13:49.489 fused_ordering(63) 00:13:49.489 fused_ordering(64) 00:13:49.489 fused_ordering(65) 00:13:49.489 fused_ordering(66) 00:13:49.489 fused_ordering(67) 00:13:49.489 fused_ordering(68) 00:13:49.489 fused_ordering(69) 00:13:49.489 fused_ordering(70) 00:13:49.489 fused_ordering(71) 00:13:49.489 fused_ordering(72) 00:13:49.489 
fused_ordering(73) 00:13:49.489 fused_ordering(74) 00:13:49.489 fused_ordering(75) 00:13:49.489 fused_ordering(76) 00:13:49.489 fused_ordering(77) 00:13:49.489 fused_ordering(78) 00:13:49.489 fused_ordering(79) 00:13:49.489 fused_ordering(80) 00:13:49.489 fused_ordering(81) 00:13:49.489 fused_ordering(82) 00:13:49.489 fused_ordering(83) 00:13:49.489 fused_ordering(84) 00:13:49.489 fused_ordering(85) 00:13:49.489 fused_ordering(86) 00:13:49.489 fused_ordering(87) 00:13:49.489 fused_ordering(88) 00:13:49.489 fused_ordering(89) 00:13:49.489 fused_ordering(90) 00:13:49.489 fused_ordering(91) 00:13:49.489 fused_ordering(92) 00:13:49.489 fused_ordering(93) 00:13:49.489 fused_ordering(94) 00:13:49.489 fused_ordering(95) 00:13:49.489 fused_ordering(96) 00:13:49.489 fused_ordering(97) 00:13:49.489 fused_ordering(98) 00:13:49.489 fused_ordering(99) 00:13:49.489 fused_ordering(100) 00:13:49.489 fused_ordering(101) 00:13:49.489 fused_ordering(102) 00:13:49.489 fused_ordering(103) 00:13:49.489 fused_ordering(104) 00:13:49.489 fused_ordering(105) 00:13:49.489 fused_ordering(106) 00:13:49.489 fused_ordering(107) 00:13:49.489 fused_ordering(108) 00:13:49.489 fused_ordering(109) 00:13:49.489 fused_ordering(110) 00:13:49.489 fused_ordering(111) 00:13:49.489 fused_ordering(112) 00:13:49.489 fused_ordering(113) 00:13:49.489 fused_ordering(114) 00:13:49.489 fused_ordering(115) 00:13:49.489 fused_ordering(116) 00:13:49.489 fused_ordering(117) 00:13:49.489 fused_ordering(118) 00:13:49.489 fused_ordering(119) 00:13:49.489 fused_ordering(120) 00:13:49.489 fused_ordering(121) 00:13:49.489 fused_ordering(122) 00:13:49.489 fused_ordering(123) 00:13:49.489 fused_ordering(124) 00:13:49.489 fused_ordering(125) 00:13:49.489 fused_ordering(126) 00:13:49.489 fused_ordering(127) 00:13:49.489 fused_ordering(128) 00:13:49.489 fused_ordering(129) 00:13:49.489 fused_ordering(130) 00:13:49.489 fused_ordering(131) 00:13:49.489 fused_ordering(132) 00:13:49.489 fused_ordering(133) 00:13:49.489 fused_ordering(134) 00:13:49.489 fused_ordering(135) 00:13:49.489 fused_ordering(136) 00:13:49.489 fused_ordering(137) 00:13:49.489 fused_ordering(138) 00:13:49.489 fused_ordering(139) 00:13:49.489 fused_ordering(140) 00:13:49.489 fused_ordering(141) 00:13:49.489 fused_ordering(142) 00:13:49.489 fused_ordering(143) 00:13:49.489 fused_ordering(144) 00:13:49.490 fused_ordering(145) 00:13:49.490 fused_ordering(146) 00:13:49.490 fused_ordering(147) 00:13:49.490 fused_ordering(148) 00:13:49.490 fused_ordering(149) 00:13:49.490 fused_ordering(150) 00:13:49.490 fused_ordering(151) 00:13:49.490 fused_ordering(152) 00:13:49.490 fused_ordering(153) 00:13:49.490 fused_ordering(154) 00:13:49.490 fused_ordering(155) 00:13:49.490 fused_ordering(156) 00:13:49.490 fused_ordering(157) 00:13:49.490 fused_ordering(158) 00:13:49.490 fused_ordering(159) 00:13:49.490 fused_ordering(160) 00:13:49.490 fused_ordering(161) 00:13:49.490 fused_ordering(162) 00:13:49.490 fused_ordering(163) 00:13:49.490 fused_ordering(164) 00:13:49.490 fused_ordering(165) 00:13:49.490 fused_ordering(166) 00:13:49.490 fused_ordering(167) 00:13:49.490 fused_ordering(168) 00:13:49.490 fused_ordering(169) 00:13:49.490 fused_ordering(170) 00:13:49.490 fused_ordering(171) 00:13:49.490 fused_ordering(172) 00:13:49.490 fused_ordering(173) 00:13:49.490 fused_ordering(174) 00:13:49.490 fused_ordering(175) 00:13:49.490 fused_ordering(176) 00:13:49.490 fused_ordering(177) 00:13:49.490 fused_ordering(178) 00:13:49.490 fused_ordering(179) 00:13:49.490 fused_ordering(180) 00:13:49.490 
fused_ordering(181) 00:13:49.490 fused_ordering(182) 00:13:49.490 fused_ordering(183) 00:13:49.490 fused_ordering(184) 00:13:49.490 fused_ordering(185) 00:13:49.490 fused_ordering(186) 00:13:49.490 fused_ordering(187) 00:13:49.490 fused_ordering(188) 00:13:49.490 fused_ordering(189) 00:13:49.490 fused_ordering(190) 00:13:49.490 fused_ordering(191) 00:13:49.490 fused_ordering(192) 00:13:49.490 fused_ordering(193) 00:13:49.490 fused_ordering(194) 00:13:49.490 fused_ordering(195) 00:13:49.490 fused_ordering(196) 00:13:49.490 fused_ordering(197) 00:13:49.490 fused_ordering(198) 00:13:49.490 fused_ordering(199) 00:13:49.490 fused_ordering(200) 00:13:49.490 fused_ordering(201) 00:13:49.490 fused_ordering(202) 00:13:49.490 fused_ordering(203) 00:13:49.490 fused_ordering(204) 00:13:49.490 fused_ordering(205) 00:13:49.490 fused_ordering(206) 00:13:49.490 fused_ordering(207) 00:13:49.490 fused_ordering(208) 00:13:49.490 fused_ordering(209) 00:13:49.490 fused_ordering(210) 00:13:49.490 fused_ordering(211) 00:13:49.490 fused_ordering(212) 00:13:49.490 fused_ordering(213) 00:13:49.490 fused_ordering(214) 00:13:49.490 fused_ordering(215) 00:13:49.490 fused_ordering(216) 00:13:49.490 fused_ordering(217) 00:13:49.490 fused_ordering(218) 00:13:49.490 fused_ordering(219) 00:13:49.490 fused_ordering(220) 00:13:49.490 fused_ordering(221) 00:13:49.490 fused_ordering(222) 00:13:49.490 fused_ordering(223) 00:13:49.490 fused_ordering(224) 00:13:49.490 fused_ordering(225) 00:13:49.490 fused_ordering(226) 00:13:49.490 fused_ordering(227) 00:13:49.490 fused_ordering(228) 00:13:49.490 fused_ordering(229) 00:13:49.490 fused_ordering(230) 00:13:49.490 fused_ordering(231) 00:13:49.490 fused_ordering(232) 00:13:49.490 fused_ordering(233) 00:13:49.490 fused_ordering(234) 00:13:49.490 fused_ordering(235) 00:13:49.490 fused_ordering(236) 00:13:49.490 fused_ordering(237) 00:13:49.490 fused_ordering(238) 00:13:49.490 fused_ordering(239) 00:13:49.490 fused_ordering(240) 00:13:49.490 fused_ordering(241) 00:13:49.490 fused_ordering(242) 00:13:49.490 fused_ordering(243) 00:13:49.490 fused_ordering(244) 00:13:49.490 fused_ordering(245) 00:13:49.490 fused_ordering(246) 00:13:49.490 fused_ordering(247) 00:13:49.490 fused_ordering(248) 00:13:49.490 fused_ordering(249) 00:13:49.490 fused_ordering(250) 00:13:49.490 fused_ordering(251) 00:13:49.490 fused_ordering(252) 00:13:49.490 fused_ordering(253) 00:13:49.490 fused_ordering(254) 00:13:49.490 fused_ordering(255) 00:13:49.490 fused_ordering(256) 00:13:49.490 fused_ordering(257) 00:13:49.490 fused_ordering(258) 00:13:49.490 fused_ordering(259) 00:13:49.490 fused_ordering(260) 00:13:49.490 fused_ordering(261) 00:13:49.490 fused_ordering(262) 00:13:49.490 fused_ordering(263) 00:13:49.490 fused_ordering(264) 00:13:49.490 fused_ordering(265) 00:13:49.490 fused_ordering(266) 00:13:49.490 fused_ordering(267) 00:13:49.490 fused_ordering(268) 00:13:49.490 fused_ordering(269) 00:13:49.490 fused_ordering(270) 00:13:49.490 fused_ordering(271) 00:13:49.490 fused_ordering(272) 00:13:49.490 fused_ordering(273) 00:13:49.490 fused_ordering(274) 00:13:49.490 fused_ordering(275) 00:13:49.490 fused_ordering(276) 00:13:49.490 fused_ordering(277) 00:13:49.490 fused_ordering(278) 00:13:49.490 fused_ordering(279) 00:13:49.490 fused_ordering(280) 00:13:49.490 fused_ordering(281) 00:13:49.490 fused_ordering(282) 00:13:49.490 fused_ordering(283) 00:13:49.490 fused_ordering(284) 00:13:49.490 fused_ordering(285) 00:13:49.490 fused_ordering(286) 00:13:49.490 fused_ordering(287) 00:13:49.490 fused_ordering(288) 
00:13:49.490 fused_ordering(289) 00:13:49.490 fused_ordering(290) 00:13:49.490 fused_ordering(291) 00:13:49.490 fused_ordering(292) 00:13:49.490 fused_ordering(293) 00:13:49.490 fused_ordering(294) 00:13:49.490 fused_ordering(295) 00:13:49.490 fused_ordering(296) 00:13:49.490 fused_ordering(297) 00:13:49.490 fused_ordering(298) 00:13:49.490 fused_ordering(299) 00:13:49.490 fused_ordering(300) 00:13:49.490 fused_ordering(301) 00:13:49.490 fused_ordering(302) 00:13:49.490 fused_ordering(303) 00:13:49.490 fused_ordering(304) 00:13:49.490 fused_ordering(305) 00:13:49.490 fused_ordering(306) 00:13:49.490 fused_ordering(307) 00:13:49.490 fused_ordering(308) 00:13:49.490 fused_ordering(309) 00:13:49.490 fused_ordering(310) 00:13:49.490 fused_ordering(311) 00:13:49.490 fused_ordering(312) 00:13:49.490 fused_ordering(313) 00:13:49.490 fused_ordering(314) 00:13:49.490 fused_ordering(315) 00:13:49.490 fused_ordering(316) 00:13:49.490 fused_ordering(317) 00:13:49.490 fused_ordering(318) 00:13:49.490 fused_ordering(319) 00:13:49.490 fused_ordering(320) 00:13:49.490 fused_ordering(321) 00:13:49.490 fused_ordering(322) 00:13:49.490 fused_ordering(323) 00:13:49.490 fused_ordering(324) 00:13:49.490 fused_ordering(325) 00:13:49.490 fused_ordering(326) 00:13:49.490 fused_ordering(327) 00:13:49.490 fused_ordering(328) 00:13:49.490 fused_ordering(329) 00:13:49.490 fused_ordering(330) 00:13:49.490 fused_ordering(331) 00:13:49.490 fused_ordering(332) 00:13:49.490 fused_ordering(333) 00:13:49.490 fused_ordering(334) 00:13:49.490 fused_ordering(335) 00:13:49.490 fused_ordering(336) 00:13:49.490 fused_ordering(337) 00:13:49.490 fused_ordering(338) 00:13:49.490 fused_ordering(339) 00:13:49.490 fused_ordering(340) 00:13:49.490 fused_ordering(341) 00:13:49.490 fused_ordering(342) 00:13:49.490 fused_ordering(343) 00:13:49.490 fused_ordering(344) 00:13:49.490 fused_ordering(345) 00:13:49.490 fused_ordering(346) 00:13:49.490 fused_ordering(347) 00:13:49.490 fused_ordering(348) 00:13:49.490 fused_ordering(349) 00:13:49.490 fused_ordering(350) 00:13:49.490 fused_ordering(351) 00:13:49.490 fused_ordering(352) 00:13:49.490 fused_ordering(353) 00:13:49.490 fused_ordering(354) 00:13:49.490 fused_ordering(355) 00:13:49.490 fused_ordering(356) 00:13:49.490 fused_ordering(357) 00:13:49.490 fused_ordering(358) 00:13:49.490 fused_ordering(359) 00:13:49.490 fused_ordering(360) 00:13:49.490 fused_ordering(361) 00:13:49.490 fused_ordering(362) 00:13:49.490 fused_ordering(363) 00:13:49.490 fused_ordering(364) 00:13:49.490 fused_ordering(365) 00:13:49.490 fused_ordering(366) 00:13:49.490 fused_ordering(367) 00:13:49.490 fused_ordering(368) 00:13:49.490 fused_ordering(369) 00:13:49.490 fused_ordering(370) 00:13:49.490 fused_ordering(371) 00:13:49.490 fused_ordering(372) 00:13:49.490 fused_ordering(373) 00:13:49.490 fused_ordering(374) 00:13:49.490 fused_ordering(375) 00:13:49.490 fused_ordering(376) 00:13:49.490 fused_ordering(377) 00:13:49.490 fused_ordering(378) 00:13:49.490 fused_ordering(379) 00:13:49.490 fused_ordering(380) 00:13:49.490 fused_ordering(381) 00:13:49.490 fused_ordering(382) 00:13:49.490 fused_ordering(383) 00:13:49.490 fused_ordering(384) 00:13:49.490 fused_ordering(385) 00:13:49.490 fused_ordering(386) 00:13:49.490 fused_ordering(387) 00:13:49.490 fused_ordering(388) 00:13:49.490 fused_ordering(389) 00:13:49.490 fused_ordering(390) 00:13:49.490 fused_ordering(391) 00:13:49.490 fused_ordering(392) 00:13:49.490 fused_ordering(393) 00:13:49.490 fused_ordering(394) 00:13:49.490 fused_ordering(395) 00:13:49.490 
fused_ordering(396) 00:13:49.490 fused_ordering(397) 00:13:49.490 fused_ordering(398) 00:13:49.490 fused_ordering(399) 00:13:49.490 fused_ordering(400) 00:13:49.490 fused_ordering(401) 00:13:49.490 fused_ordering(402) 00:13:49.490 fused_ordering(403) 00:13:49.490 fused_ordering(404) 00:13:49.490 fused_ordering(405) 00:13:49.490 fused_ordering(406) 00:13:49.490 fused_ordering(407) 00:13:49.490 fused_ordering(408) 00:13:49.490 fused_ordering(409) 00:13:49.490 fused_ordering(410) 00:13:49.749 fused_ordering(411) 00:13:49.749 fused_ordering(412) 00:13:49.749 fused_ordering(413) 00:13:49.749 fused_ordering(414) 00:13:49.749 fused_ordering(415) 00:13:49.749 fused_ordering(416) 00:13:49.749 fused_ordering(417) 00:13:49.749 fused_ordering(418) 00:13:49.749 fused_ordering(419) 00:13:49.749 fused_ordering(420) 00:13:49.749 fused_ordering(421) 00:13:49.749 fused_ordering(422) 00:13:49.749 fused_ordering(423) 00:13:49.749 fused_ordering(424) 00:13:49.749 fused_ordering(425) 00:13:49.749 fused_ordering(426) 00:13:49.749 fused_ordering(427) 00:13:49.749 fused_ordering(428) 00:13:49.749 fused_ordering(429) 00:13:49.749 fused_ordering(430) 00:13:49.749 fused_ordering(431) 00:13:49.749 fused_ordering(432) 00:13:49.749 fused_ordering(433) 00:13:49.749 fused_ordering(434) 00:13:49.749 fused_ordering(435) 00:13:49.749 fused_ordering(436) 00:13:49.749 fused_ordering(437) 00:13:49.749 fused_ordering(438) 00:13:49.749 fused_ordering(439) 00:13:49.749 fused_ordering(440) 00:13:49.749 fused_ordering(441) 00:13:49.749 fused_ordering(442) 00:13:49.749 fused_ordering(443) 00:13:49.749 fused_ordering(444) 00:13:49.749 fused_ordering(445) 00:13:49.749 fused_ordering(446) 00:13:49.749 fused_ordering(447) 00:13:49.749 fused_ordering(448) 00:13:49.749 fused_ordering(449) 00:13:49.749 fused_ordering(450) 00:13:49.749 fused_ordering(451) 00:13:49.749 fused_ordering(452) 00:13:49.749 fused_ordering(453) 00:13:49.749 fused_ordering(454) 00:13:49.749 fused_ordering(455) 00:13:49.749 fused_ordering(456) 00:13:49.749 fused_ordering(457) 00:13:49.749 fused_ordering(458) 00:13:49.749 fused_ordering(459) 00:13:49.749 fused_ordering(460) 00:13:49.749 fused_ordering(461) 00:13:49.749 fused_ordering(462) 00:13:49.749 fused_ordering(463) 00:13:49.749 fused_ordering(464) 00:13:49.749 fused_ordering(465) 00:13:49.750 fused_ordering(466) 00:13:49.750 fused_ordering(467) 00:13:49.750 fused_ordering(468) 00:13:49.750 fused_ordering(469) 00:13:49.750 fused_ordering(470) 00:13:49.750 fused_ordering(471) 00:13:49.750 fused_ordering(472) 00:13:49.750 fused_ordering(473) 00:13:49.750 fused_ordering(474) 00:13:49.750 fused_ordering(475) 00:13:49.750 fused_ordering(476) 00:13:49.750 fused_ordering(477) 00:13:49.750 fused_ordering(478) 00:13:49.750 fused_ordering(479) 00:13:49.750 fused_ordering(480) 00:13:49.750 fused_ordering(481) 00:13:49.750 fused_ordering(482) 00:13:49.750 fused_ordering(483) 00:13:49.750 fused_ordering(484) 00:13:49.750 fused_ordering(485) 00:13:49.750 fused_ordering(486) 00:13:49.750 fused_ordering(487) 00:13:49.750 fused_ordering(488) 00:13:49.750 fused_ordering(489) 00:13:49.750 fused_ordering(490) 00:13:49.750 fused_ordering(491) 00:13:49.750 fused_ordering(492) 00:13:49.750 fused_ordering(493) 00:13:49.750 fused_ordering(494) 00:13:49.750 fused_ordering(495) 00:13:49.750 fused_ordering(496) 00:13:49.750 fused_ordering(497) 00:13:49.750 fused_ordering(498) 00:13:49.750 fused_ordering(499) 00:13:49.750 fused_ordering(500) 00:13:49.750 fused_ordering(501) 00:13:49.750 fused_ordering(502) 00:13:49.750 fused_ordering(503) 
00:13:49.750 fused_ordering(504) 00:13:49.750 fused_ordering(505) 00:13:49.750 fused_ordering(506) 00:13:49.750 fused_ordering(507) 00:13:49.750 fused_ordering(508) 00:13:49.750 fused_ordering(509) 00:13:49.750 fused_ordering(510) 00:13:49.750 fused_ordering(511) 00:13:49.750 fused_ordering(512) 00:13:49.750 fused_ordering(513) 00:13:49.750 fused_ordering(514) 00:13:49.750 fused_ordering(515) 00:13:49.750 fused_ordering(516) 00:13:49.750 fused_ordering(517) 00:13:49.750 fused_ordering(518) 00:13:49.750 fused_ordering(519) 00:13:49.750 fused_ordering(520) 00:13:49.750 fused_ordering(521) 00:13:49.750 fused_ordering(522) 00:13:49.750 fused_ordering(523) 00:13:49.750 fused_ordering(524) 00:13:49.750 fused_ordering(525) 00:13:49.750 fused_ordering(526) 00:13:49.750 fused_ordering(527) 00:13:49.750 fused_ordering(528) 00:13:49.750 fused_ordering(529) 00:13:49.750 fused_ordering(530) 00:13:49.750 fused_ordering(531) 00:13:49.750 fused_ordering(532) 00:13:49.750 fused_ordering(533) 00:13:49.750 fused_ordering(534) 00:13:49.750 fused_ordering(535) 00:13:49.750 fused_ordering(536) 00:13:49.750 fused_ordering(537) 00:13:49.750 fused_ordering(538) 00:13:49.750 fused_ordering(539) 00:13:49.750 fused_ordering(540) 00:13:49.750 fused_ordering(541) 00:13:49.750 fused_ordering(542) 00:13:49.750 fused_ordering(543) 00:13:49.750 fused_ordering(544) 00:13:49.750 fused_ordering(545) 00:13:49.750 fused_ordering(546) 00:13:49.750 fused_ordering(547) 00:13:49.750 fused_ordering(548) 00:13:49.750 fused_ordering(549) 00:13:49.750 fused_ordering(550) 00:13:49.750 fused_ordering(551) 00:13:49.750 fused_ordering(552) 00:13:49.750 fused_ordering(553) 00:13:49.750 fused_ordering(554) 00:13:49.750 fused_ordering(555) 00:13:49.750 fused_ordering(556) 00:13:49.750 fused_ordering(557) 00:13:49.750 fused_ordering(558) 00:13:49.750 fused_ordering(559) 00:13:49.750 fused_ordering(560) 00:13:49.750 fused_ordering(561) 00:13:49.750 fused_ordering(562) 00:13:49.750 fused_ordering(563) 00:13:49.750 fused_ordering(564) 00:13:49.750 fused_ordering(565) 00:13:49.750 fused_ordering(566) 00:13:49.750 fused_ordering(567) 00:13:49.750 fused_ordering(568) 00:13:49.750 fused_ordering(569) 00:13:49.750 fused_ordering(570) 00:13:49.750 fused_ordering(571) 00:13:49.750 fused_ordering(572) 00:13:49.750 fused_ordering(573) 00:13:49.750 fused_ordering(574) 00:13:49.750 fused_ordering(575) 00:13:49.750 fused_ordering(576) 00:13:49.750 fused_ordering(577) 00:13:49.750 fused_ordering(578) 00:13:49.750 fused_ordering(579) 00:13:49.750 fused_ordering(580) 00:13:49.750 fused_ordering(581) 00:13:49.750 fused_ordering(582) 00:13:49.750 fused_ordering(583) 00:13:49.750 fused_ordering(584) 00:13:49.750 fused_ordering(585) 00:13:49.750 fused_ordering(586) 00:13:49.750 fused_ordering(587) 00:13:49.750 fused_ordering(588) 00:13:49.750 fused_ordering(589) 00:13:49.750 fused_ordering(590) 00:13:49.750 fused_ordering(591) 00:13:49.750 fused_ordering(592) 00:13:49.750 fused_ordering(593) 00:13:49.750 fused_ordering(594) 00:13:49.750 fused_ordering(595) 00:13:49.750 fused_ordering(596) 00:13:49.750 fused_ordering(597) 00:13:49.750 fused_ordering(598) 00:13:49.750 fused_ordering(599) 00:13:49.750 fused_ordering(600) 00:13:49.750 fused_ordering(601) 00:13:49.750 fused_ordering(602) 00:13:49.750 fused_ordering(603) 00:13:49.750 fused_ordering(604) 00:13:49.750 fused_ordering(605) 00:13:49.750 fused_ordering(606) 00:13:49.750 fused_ordering(607) 00:13:49.750 fused_ordering(608) 00:13:49.750 fused_ordering(609) 00:13:49.750 fused_ordering(610) 00:13:49.750 
fused_ordering(611) 00:13:49.750 fused_ordering(612) 00:13:49.750 fused_ordering(613) 00:13:49.750 fused_ordering(614) 00:13:49.750 fused_ordering(615) 00:13:50.316 fused_ordering(616) 00:13:50.316 fused_ordering(617) 00:13:50.316 fused_ordering(618) 00:13:50.316 fused_ordering(619) 00:13:50.316 fused_ordering(620) 00:13:50.316 fused_ordering(621) 00:13:50.316 fused_ordering(622) 00:13:50.316 fused_ordering(623) 00:13:50.316 fused_ordering(624) 00:13:50.316 fused_ordering(625) 00:13:50.316 fused_ordering(626) 00:13:50.316 fused_ordering(627) 00:13:50.316 fused_ordering(628) 00:13:50.316 fused_ordering(629) 00:13:50.316 fused_ordering(630) 00:13:50.316 fused_ordering(631) 00:13:50.316 fused_ordering(632) 00:13:50.316 fused_ordering(633) 00:13:50.316 fused_ordering(634) 00:13:50.316 fused_ordering(635) 00:13:50.316 fused_ordering(636) 00:13:50.316 fused_ordering(637) 00:13:50.316 fused_ordering(638) 00:13:50.316 fused_ordering(639) 00:13:50.316 fused_ordering(640) 00:13:50.317 fused_ordering(641) 00:13:50.317 fused_ordering(642) 00:13:50.317 fused_ordering(643) 00:13:50.317 fused_ordering(644) 00:13:50.317 fused_ordering(645) 00:13:50.317 fused_ordering(646) 00:13:50.317 fused_ordering(647) 00:13:50.317 fused_ordering(648) 00:13:50.317 fused_ordering(649) 00:13:50.317 fused_ordering(650) 00:13:50.317 fused_ordering(651) 00:13:50.317 fused_ordering(652) 00:13:50.317 fused_ordering(653) 00:13:50.317 fused_ordering(654) 00:13:50.317 fused_ordering(655) 00:13:50.317 fused_ordering(656) 00:13:50.317 fused_ordering(657) 00:13:50.317 fused_ordering(658) 00:13:50.317 fused_ordering(659) 00:13:50.317 fused_ordering(660) 00:13:50.317 fused_ordering(661) 00:13:50.317 fused_ordering(662) 00:13:50.317 fused_ordering(663) 00:13:50.317 fused_ordering(664) 00:13:50.317 fused_ordering(665) 00:13:50.317 fused_ordering(666) 00:13:50.317 fused_ordering(667) 00:13:50.317 fused_ordering(668) 00:13:50.317 fused_ordering(669) 00:13:50.317 fused_ordering(670) 00:13:50.317 fused_ordering(671) 00:13:50.317 fused_ordering(672) 00:13:50.317 fused_ordering(673) 00:13:50.317 fused_ordering(674) 00:13:50.317 fused_ordering(675) 00:13:50.317 fused_ordering(676) 00:13:50.317 fused_ordering(677) 00:13:50.317 fused_ordering(678) 00:13:50.317 fused_ordering(679) 00:13:50.317 fused_ordering(680) 00:13:50.317 fused_ordering(681) 00:13:50.317 fused_ordering(682) 00:13:50.317 fused_ordering(683) 00:13:50.317 fused_ordering(684) 00:13:50.317 fused_ordering(685) 00:13:50.317 fused_ordering(686) 00:13:50.317 fused_ordering(687) 00:13:50.317 fused_ordering(688) 00:13:50.317 fused_ordering(689) 00:13:50.317 fused_ordering(690) 00:13:50.317 fused_ordering(691) 00:13:50.317 fused_ordering(692) 00:13:50.317 fused_ordering(693) 00:13:50.317 fused_ordering(694) 00:13:50.317 fused_ordering(695) 00:13:50.317 fused_ordering(696) 00:13:50.317 fused_ordering(697) 00:13:50.317 fused_ordering(698) 00:13:50.317 fused_ordering(699) 00:13:50.317 fused_ordering(700) 00:13:50.317 fused_ordering(701) 00:13:50.317 fused_ordering(702) 00:13:50.317 fused_ordering(703) 00:13:50.317 fused_ordering(704) 00:13:50.317 fused_ordering(705) 00:13:50.317 fused_ordering(706) 00:13:50.317 fused_ordering(707) 00:13:50.317 fused_ordering(708) 00:13:50.317 fused_ordering(709) 00:13:50.317 fused_ordering(710) 00:13:50.317 fused_ordering(711) 00:13:50.317 fused_ordering(712) 00:13:50.317 fused_ordering(713) 00:13:50.317 fused_ordering(714) 00:13:50.317 fused_ordering(715) 00:13:50.317 fused_ordering(716) 00:13:50.317 fused_ordering(717) 00:13:50.317 fused_ordering(718) 
00:13:50.317 fused_ordering(719) 00:13:50.317 fused_ordering(720) 00:13:50.317 fused_ordering(721) 00:13:50.317 fused_ordering(722) 00:13:50.317 fused_ordering(723) 00:13:50.317 fused_ordering(724) 00:13:50.317 fused_ordering(725) 00:13:50.317 fused_ordering(726) 00:13:50.317 fused_ordering(727) 00:13:50.317 fused_ordering(728) 00:13:50.317 fused_ordering(729) 00:13:50.317 fused_ordering(730) 00:13:50.317 fused_ordering(731) 00:13:50.317 fused_ordering(732) 00:13:50.317 fused_ordering(733) 00:13:50.317 fused_ordering(734) 00:13:50.317 fused_ordering(735) 00:13:50.317 fused_ordering(736) 00:13:50.317 fused_ordering(737) 00:13:50.317 fused_ordering(738) 00:13:50.317 fused_ordering(739) 00:13:50.317 fused_ordering(740) 00:13:50.317 fused_ordering(741) 00:13:50.317 fused_ordering(742) 00:13:50.317 fused_ordering(743) 00:13:50.317 fused_ordering(744) 00:13:50.317 fused_ordering(745) 00:13:50.317 fused_ordering(746) 00:13:50.317 fused_ordering(747) 00:13:50.317 fused_ordering(748) 00:13:50.317 fused_ordering(749) 00:13:50.317 fused_ordering(750) 00:13:50.317 fused_ordering(751) 00:13:50.317 fused_ordering(752) 00:13:50.317 fused_ordering(753) 00:13:50.317 fused_ordering(754) 00:13:50.317 fused_ordering(755) 00:13:50.317 fused_ordering(756) 00:13:50.317 fused_ordering(757) 00:13:50.317 fused_ordering(758) 00:13:50.317 fused_ordering(759) 00:13:50.317 fused_ordering(760) 00:13:50.317 fused_ordering(761) 00:13:50.317 fused_ordering(762) 00:13:50.317 fused_ordering(763) 00:13:50.317 fused_ordering(764) 00:13:50.317 fused_ordering(765) 00:13:50.317 fused_ordering(766) 00:13:50.317 fused_ordering(767) 00:13:50.317 fused_ordering(768) 00:13:50.317 fused_ordering(769) 00:13:50.317 fused_ordering(770) 00:13:50.317 fused_ordering(771) 00:13:50.317 fused_ordering(772) 00:13:50.317 fused_ordering(773) 00:13:50.317 fused_ordering(774) 00:13:50.317 fused_ordering(775) 00:13:50.317 fused_ordering(776) 00:13:50.317 fused_ordering(777) 00:13:50.317 fused_ordering(778) 00:13:50.317 fused_ordering(779) 00:13:50.317 fused_ordering(780) 00:13:50.317 fused_ordering(781) 00:13:50.317 fused_ordering(782) 00:13:50.317 fused_ordering(783) 00:13:50.317 fused_ordering(784) 00:13:50.317 fused_ordering(785) 00:13:50.317 fused_ordering(786) 00:13:50.317 fused_ordering(787) 00:13:50.317 fused_ordering(788) 00:13:50.317 fused_ordering(789) 00:13:50.317 fused_ordering(790) 00:13:50.317 fused_ordering(791) 00:13:50.317 fused_ordering(792) 00:13:50.317 fused_ordering(793) 00:13:50.317 fused_ordering(794) 00:13:50.317 fused_ordering(795) 00:13:50.317 fused_ordering(796) 00:13:50.317 fused_ordering(797) 00:13:50.317 fused_ordering(798) 00:13:50.317 fused_ordering(799) 00:13:50.317 fused_ordering(800) 00:13:50.317 fused_ordering(801) 00:13:50.317 fused_ordering(802) 00:13:50.317 fused_ordering(803) 00:13:50.317 fused_ordering(804) 00:13:50.317 fused_ordering(805) 00:13:50.317 fused_ordering(806) 00:13:50.317 fused_ordering(807) 00:13:50.317 fused_ordering(808) 00:13:50.317 fused_ordering(809) 00:13:50.317 fused_ordering(810) 00:13:50.317 fused_ordering(811) 00:13:50.317 fused_ordering(812) 00:13:50.317 fused_ordering(813) 00:13:50.317 fused_ordering(814) 00:13:50.317 fused_ordering(815) 00:13:50.317 fused_ordering(816) 00:13:50.317 fused_ordering(817) 00:13:50.317 fused_ordering(818) 00:13:50.317 fused_ordering(819) 00:13:50.317 fused_ordering(820) 00:13:50.882 fused_ordering(821) 00:13:50.882 fused_ordering(822) 00:13:50.882 fused_ordering(823) 00:13:50.882 fused_ordering(824) 00:13:50.882 fused_ordering(825) 00:13:50.882 
fused_ordering(826) 00:13:50.882 fused_ordering(827) 00:13:50.882 fused_ordering(828) 00:13:50.882 fused_ordering(829) 00:13:50.882 fused_ordering(830) 00:13:50.882 fused_ordering(831) 00:13:50.882 fused_ordering(832) 00:13:50.882 fused_ordering(833) 00:13:50.882 fused_ordering(834) 00:13:50.882 fused_ordering(835) 00:13:50.882 fused_ordering(836) 00:13:50.882 fused_ordering(837) 00:13:50.882 fused_ordering(838) 00:13:50.882 fused_ordering(839) 00:13:50.882 fused_ordering(840) 00:13:50.882 fused_ordering(841) 00:13:50.882 fused_ordering(842) 00:13:50.882 fused_ordering(843) 00:13:50.882 fused_ordering(844) 00:13:50.882 fused_ordering(845) 00:13:50.882 fused_ordering(846) 00:13:50.882 fused_ordering(847) 00:13:50.882 fused_ordering(848) 00:13:50.882 fused_ordering(849) 00:13:50.882 fused_ordering(850) 00:13:50.882 fused_ordering(851) 00:13:50.882 fused_ordering(852) 00:13:50.882 fused_ordering(853) 00:13:50.882 fused_ordering(854) 00:13:50.882 fused_ordering(855) 00:13:50.882 fused_ordering(856) 00:13:50.882 fused_ordering(857) 00:13:50.882 fused_ordering(858) 00:13:50.882 fused_ordering(859) 00:13:50.882 fused_ordering(860) 00:13:50.882 fused_ordering(861) 00:13:50.882 fused_ordering(862) 00:13:50.882 fused_ordering(863) 00:13:50.882 fused_ordering(864) 00:13:50.882 fused_ordering(865) 00:13:50.882 fused_ordering(866) 00:13:50.882 fused_ordering(867) 00:13:50.882 fused_ordering(868) 00:13:50.882 fused_ordering(869) 00:13:50.882 fused_ordering(870) 00:13:50.882 fused_ordering(871) 00:13:50.882 fused_ordering(872) 00:13:50.882 fused_ordering(873) 00:13:50.882 fused_ordering(874) 00:13:50.882 fused_ordering(875) 00:13:50.882 fused_ordering(876) 00:13:50.882 fused_ordering(877) 00:13:50.882 fused_ordering(878) 00:13:50.882 fused_ordering(879) 00:13:50.882 fused_ordering(880) 00:13:50.882 fused_ordering(881) 00:13:50.882 fused_ordering(882) 00:13:50.882 fused_ordering(883) 00:13:50.882 fused_ordering(884) 00:13:50.882 fused_ordering(885) 00:13:50.882 fused_ordering(886) 00:13:50.882 fused_ordering(887) 00:13:50.882 fused_ordering(888) 00:13:50.882 fused_ordering(889) 00:13:50.882 fused_ordering(890) 00:13:50.882 fused_ordering(891) 00:13:50.882 fused_ordering(892) 00:13:50.882 fused_ordering(893) 00:13:50.882 fused_ordering(894) 00:13:50.882 fused_ordering(895) 00:13:50.882 fused_ordering(896) 00:13:50.882 fused_ordering(897) 00:13:50.882 fused_ordering(898) 00:13:50.882 fused_ordering(899) 00:13:50.882 fused_ordering(900) 00:13:50.882 fused_ordering(901) 00:13:50.882 fused_ordering(902) 00:13:50.882 fused_ordering(903) 00:13:50.882 fused_ordering(904) 00:13:50.882 fused_ordering(905) 00:13:50.882 fused_ordering(906) 00:13:50.882 fused_ordering(907) 00:13:50.882 fused_ordering(908) 00:13:50.882 fused_ordering(909) 00:13:50.882 fused_ordering(910) 00:13:50.882 fused_ordering(911) 00:13:50.882 fused_ordering(912) 00:13:50.882 fused_ordering(913) 00:13:50.882 fused_ordering(914) 00:13:50.882 fused_ordering(915) 00:13:50.882 fused_ordering(916) 00:13:50.882 fused_ordering(917) 00:13:50.882 fused_ordering(918) 00:13:50.882 fused_ordering(919) 00:13:50.882 fused_ordering(920) 00:13:50.882 fused_ordering(921) 00:13:50.882 fused_ordering(922) 00:13:50.882 fused_ordering(923) 00:13:50.882 fused_ordering(924) 00:13:50.882 fused_ordering(925) 00:13:50.882 fused_ordering(926) 00:13:50.882 fused_ordering(927) 00:13:50.882 fused_ordering(928) 00:13:50.882 fused_ordering(929) 00:13:50.882 fused_ordering(930) 00:13:50.882 fused_ordering(931) 00:13:50.882 fused_ordering(932) 00:13:50.882 fused_ordering(933) 
00:13:50.882 fused_ordering(934) 00:13:50.882 fused_ordering(935) 00:13:50.882 fused_ordering(936) 00:13:50.882 fused_ordering(937) 00:13:50.882 fused_ordering(938) 00:13:50.882 fused_ordering(939) 00:13:50.882 fused_ordering(940) 00:13:50.882 fused_ordering(941) 00:13:50.882 fused_ordering(942) 00:13:50.882 fused_ordering(943) 00:13:50.882 fused_ordering(944) 00:13:50.882 fused_ordering(945) 00:13:50.882 fused_ordering(946) 00:13:50.882 fused_ordering(947) 00:13:50.882 fused_ordering(948) 00:13:50.882 fused_ordering(949) 00:13:50.882 fused_ordering(950) 00:13:50.882 fused_ordering(951) 00:13:50.882 fused_ordering(952) 00:13:50.882 fused_ordering(953) 00:13:50.882 fused_ordering(954) 00:13:50.882 fused_ordering(955) 00:13:50.882 fused_ordering(956) 00:13:50.882 fused_ordering(957) 00:13:50.882 fused_ordering(958) 00:13:50.882 fused_ordering(959) 00:13:50.882 fused_ordering(960) 00:13:50.882 fused_ordering(961) 00:13:50.882 fused_ordering(962) 00:13:50.882 fused_ordering(963) 00:13:50.882 fused_ordering(964) 00:13:50.882 fused_ordering(965) 00:13:50.882 fused_ordering(966) 00:13:50.882 fused_ordering(967) 00:13:50.882 fused_ordering(968) 00:13:50.882 fused_ordering(969) 00:13:50.882 fused_ordering(970) 00:13:50.882 fused_ordering(971) 00:13:50.882 fused_ordering(972) 00:13:50.882 fused_ordering(973) 00:13:50.882 fused_ordering(974) 00:13:50.882 fused_ordering(975) 00:13:50.882 fused_ordering(976) 00:13:50.882 fused_ordering(977) 00:13:50.882 fused_ordering(978) 00:13:50.882 fused_ordering(979) 00:13:50.882 fused_ordering(980) 00:13:50.882 fused_ordering(981) 00:13:50.882 fused_ordering(982) 00:13:50.882 fused_ordering(983) 00:13:50.882 fused_ordering(984) 00:13:50.882 fused_ordering(985) 00:13:50.882 fused_ordering(986) 00:13:50.882 fused_ordering(987) 00:13:50.882 fused_ordering(988) 00:13:50.882 fused_ordering(989) 00:13:50.882 fused_ordering(990) 00:13:50.882 fused_ordering(991) 00:13:50.882 fused_ordering(992) 00:13:50.882 fused_ordering(993) 00:13:50.882 fused_ordering(994) 00:13:50.882 fused_ordering(995) 00:13:50.882 fused_ordering(996) 00:13:50.882 fused_ordering(997) 00:13:50.882 fused_ordering(998) 00:13:50.882 fused_ordering(999) 00:13:50.882 fused_ordering(1000) 00:13:50.882 fused_ordering(1001) 00:13:50.882 fused_ordering(1002) 00:13:50.882 fused_ordering(1003) 00:13:50.882 fused_ordering(1004) 00:13:50.882 fused_ordering(1005) 00:13:50.882 fused_ordering(1006) 00:13:50.882 fused_ordering(1007) 00:13:50.882 fused_ordering(1008) 00:13:50.882 fused_ordering(1009) 00:13:50.882 fused_ordering(1010) 00:13:50.882 fused_ordering(1011) 00:13:50.882 fused_ordering(1012) 00:13:50.882 fused_ordering(1013) 00:13:50.882 fused_ordering(1014) 00:13:50.882 fused_ordering(1015) 00:13:50.882 fused_ordering(1016) 00:13:50.882 fused_ordering(1017) 00:13:50.882 fused_ordering(1018) 00:13:50.882 fused_ordering(1019) 00:13:50.882 fused_ordering(1020) 00:13:50.882 fused_ordering(1021) 00:13:50.882 fused_ordering(1022) 00:13:50.882 fused_ordering(1023) 00:13:50.882 04:09:52 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:50.882 04:09:52 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:50.882 04:09:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:50.882 04:09:52 -- nvmf/common.sh@116 -- # sync 00:13:50.882 04:09:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:50.882 04:09:52 -- nvmf/common.sh@119 -- # set +e 00:13:50.882 04:09:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:50.882 04:09:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:50.882 rmmod 
nvme_tcp 00:13:50.882 rmmod nvme_fabrics 00:13:50.883 rmmod nvme_keyring 00:13:50.883 04:09:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:50.883 04:09:52 -- nvmf/common.sh@123 -- # set -e 00:13:50.883 04:09:52 -- nvmf/common.sh@124 -- # return 0 00:13:50.883 04:09:52 -- nvmf/common.sh@477 -- # '[' -n 82306 ']' 00:13:50.883 04:09:52 -- nvmf/common.sh@478 -- # killprocess 82306 00:13:50.883 04:09:52 -- common/autotest_common.sh@936 -- # '[' -z 82306 ']' 00:13:50.883 04:09:52 -- common/autotest_common.sh@940 -- # kill -0 82306 00:13:50.883 04:09:52 -- common/autotest_common.sh@941 -- # uname 00:13:50.883 04:09:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:50.883 04:09:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82306 00:13:50.883 04:09:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:50.883 04:09:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:50.883 killing process with pid 82306 00:13:50.883 04:09:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82306' 00:13:50.883 04:09:52 -- common/autotest_common.sh@955 -- # kill 82306 00:13:50.883 04:09:52 -- common/autotest_common.sh@960 -- # wait 82306 00:13:51.141 04:09:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.141 04:09:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.141 04:09:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.141 04:09:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.141 04:09:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.141 04:09:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.141 04:09:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.141 04:09:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.141 04:09:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:51.141 00:13:51.141 real 0m3.724s 00:13:51.141 user 0m4.158s 00:13:51.141 sys 0m1.367s 00:13:51.141 04:09:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:51.141 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.141 ************************************ 00:13:51.141 END TEST nvmf_fused_ordering 00:13:51.141 ************************************ 00:13:51.141 04:09:52 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:51.141 04:09:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:51.141 04:09:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.141 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.141 ************************************ 00:13:51.141 START TEST nvmf_delete_subsystem 00:13:51.141 ************************************ 00:13:51.141 04:09:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:51.141 * Looking for test storage... 
00:13:51.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:51.141 04:09:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:51.141 04:09:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:51.141 04:09:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:51.401 04:09:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:51.401 04:09:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:51.401 04:09:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:51.401 04:09:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:51.401 04:09:52 -- scripts/common.sh@335 -- # IFS=.-: 00:13:51.401 04:09:52 -- scripts/common.sh@335 -- # read -ra ver1 00:13:51.401 04:09:52 -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.401 04:09:52 -- scripts/common.sh@336 -- # read -ra ver2 00:13:51.401 04:09:52 -- scripts/common.sh@337 -- # local 'op=<' 00:13:51.401 04:09:52 -- scripts/common.sh@339 -- # ver1_l=2 00:13:51.401 04:09:52 -- scripts/common.sh@340 -- # ver2_l=1 00:13:51.401 04:09:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:51.401 04:09:52 -- scripts/common.sh@343 -- # case "$op" in 00:13:51.401 04:09:52 -- scripts/common.sh@344 -- # : 1 00:13:51.401 04:09:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:51.401 04:09:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:51.401 04:09:52 -- scripts/common.sh@364 -- # decimal 1 00:13:51.401 04:09:52 -- scripts/common.sh@352 -- # local d=1 00:13:51.401 04:09:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.401 04:09:52 -- scripts/common.sh@354 -- # echo 1 00:13:51.401 04:09:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:51.401 04:09:52 -- scripts/common.sh@365 -- # decimal 2 00:13:51.401 04:09:52 -- scripts/common.sh@352 -- # local d=2 00:13:51.401 04:09:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.401 04:09:52 -- scripts/common.sh@354 -- # echo 2 00:13:51.401 04:09:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:51.401 04:09:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:51.401 04:09:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:51.401 04:09:52 -- scripts/common.sh@367 -- # return 0 00:13:51.401 04:09:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.401 04:09:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.401 --rc genhtml_branch_coverage=1 00:13:51.401 --rc genhtml_function_coverage=1 00:13:51.401 --rc genhtml_legend=1 00:13:51.401 --rc geninfo_all_blocks=1 00:13:51.401 --rc geninfo_unexecuted_blocks=1 00:13:51.401 00:13:51.401 ' 00:13:51.401 04:09:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.401 --rc genhtml_branch_coverage=1 00:13:51.401 --rc genhtml_function_coverage=1 00:13:51.401 --rc genhtml_legend=1 00:13:51.401 --rc geninfo_all_blocks=1 00:13:51.401 --rc geninfo_unexecuted_blocks=1 00:13:51.401 00:13:51.401 ' 00:13:51.401 04:09:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.401 --rc genhtml_branch_coverage=1 00:13:51.401 --rc genhtml_function_coverage=1 00:13:51.401 --rc genhtml_legend=1 00:13:51.401 --rc geninfo_all_blocks=1 00:13:51.401 --rc geninfo_unexecuted_blocks=1 00:13:51.401 00:13:51.401 ' 00:13:51.401 
04:09:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.401 --rc genhtml_branch_coverage=1 00:13:51.401 --rc genhtml_function_coverage=1 00:13:51.401 --rc genhtml_legend=1 00:13:51.401 --rc geninfo_all_blocks=1 00:13:51.401 --rc geninfo_unexecuted_blocks=1 00:13:51.401 00:13:51.401 ' 00:13:51.401 04:09:52 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.401 04:09:52 -- nvmf/common.sh@7 -- # uname -s 00:13:51.401 04:09:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.401 04:09:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.401 04:09:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.401 04:09:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.401 04:09:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.401 04:09:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.401 04:09:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.401 04:09:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.401 04:09:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.401 04:09:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.401 04:09:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:13:51.401 04:09:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:13:51.401 04:09:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.401 04:09:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.401 04:09:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:51.401 04:09:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.401 04:09:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.401 04:09:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.401 04:09:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.401 04:09:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.401 04:09:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.401 04:09:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.401 04:09:52 -- paths/export.sh@5 -- # export PATH 00:13:51.401 04:09:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.401 04:09:52 -- nvmf/common.sh@46 -- # : 0 00:13:51.401 04:09:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:51.401 04:09:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:51.401 04:09:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:51.401 04:09:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.401 04:09:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.401 04:09:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:51.401 04:09:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:51.401 04:09:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:51.401 04:09:52 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:51.401 04:09:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:51.401 04:09:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.401 04:09:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:51.401 04:09:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:51.401 04:09:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:51.401 04:09:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.401 04:09:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.401 04:09:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.401 04:09:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:51.401 04:09:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:51.401 04:09:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:51.401 04:09:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:51.401 04:09:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:51.401 04:09:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:51.401 04:09:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.401 04:09:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.401 04:09:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:51.401 04:09:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:51.401 04:09:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:51.401 04:09:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:51.401 04:09:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:51.401 04:09:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:51.401 04:09:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:51.401 04:09:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:51.401 04:09:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:51.401 04:09:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:51.401 04:09:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:51.401 04:09:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:51.401 Cannot find device "nvmf_tgt_br" 00:13:51.401 04:09:52 -- nvmf/common.sh@154 -- # true 00:13:51.401 04:09:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.401 Cannot find device "nvmf_tgt_br2" 00:13:51.401 04:09:53 -- nvmf/common.sh@155 -- # true 00:13:51.401 04:09:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:51.401 04:09:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:51.401 Cannot find device "nvmf_tgt_br" 00:13:51.401 04:09:53 -- nvmf/common.sh@157 -- # true 00:13:51.401 04:09:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:51.401 Cannot find device "nvmf_tgt_br2" 00:13:51.401 04:09:53 -- nvmf/common.sh@158 -- # true 00:13:51.401 04:09:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:51.401 04:09:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:51.402 04:09:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.402 04:09:53 -- nvmf/common.sh@161 -- # true 00:13:51.402 04:09:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.402 04:09:53 -- nvmf/common.sh@162 -- # true 00:13:51.402 04:09:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.402 04:09:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.402 04:09:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.402 04:09:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.660 04:09:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.660 04:09:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.660 04:09:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.660 04:09:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.660 04:09:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.660 04:09:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:51.661 04:09:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:51.661 04:09:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:51.661 04:09:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:51.661 04:09:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.661 04:09:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.661 04:09:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.661 04:09:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:51.661 04:09:53 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:51.661 04:09:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.661 04:09:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.661 04:09:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.661 04:09:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.661 04:09:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.661 04:09:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:51.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:51.661 00:13:51.661 --- 10.0.0.2 ping statistics --- 00:13:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.661 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:51.661 04:09:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:51.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:51.661 00:13:51.661 --- 10.0.0.3 ping statistics --- 00:13:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.661 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:51.661 04:09:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:51.661 00:13:51.661 --- 10.0.0.1 ping statistics --- 00:13:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.661 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:51.661 04:09:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.661 04:09:53 -- nvmf/common.sh@421 -- # return 0 00:13:51.661 04:09:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.661 04:09:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.661 04:09:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.661 04:09:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.661 04:09:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.661 04:09:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.661 04:09:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.661 04:09:53 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:51.661 04:09:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:51.661 04:09:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.661 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:13:51.661 04:09:53 -- nvmf/common.sh@469 -- # nvmfpid=82545 00:13:51.661 04:09:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:51.661 04:09:53 -- nvmf/common.sh@470 -- # waitforlisten 82545 00:13:51.661 04:09:53 -- common/autotest_common.sh@829 -- # '[' -z 82545 ']' 00:13:51.661 04:09:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.661 04:09:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.661 04:09:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
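The nvmf_veth_init sequence traced above is what gives the target its own network namespace before nvmfappstart runs. A condensed sketch of that bring-up, using the interface names and addresses shown in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the individual 'ip link set ... up' steps follow the same pattern and are omitted here):

  # target gets a private netns; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target listen address
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                                # bridge the peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                     # reachability check before starting nvmf_tgt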
00:13:51.661 04:09:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.661 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:13:51.661 [2024-11-26 04:09:53.405685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:51.661 [2024-11-26 04:09:53.405794] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.920 [2024-11-26 04:09:53.547260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.920 [2024-11-26 04:09:53.622741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.920 [2024-11-26 04:09:53.622942] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.920 [2024-11-26 04:09:53.622955] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.920 [2024-11-26 04:09:53.622964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.920 [2024-11-26 04:09:53.623239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.920 [2024-11-26 04:09:53.623253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.869 04:09:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.869 04:09:54 -- common/autotest_common.sh@862 -- # return 0 00:13:52.869 04:09:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.869 04:09:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 04:09:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.869 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 [2024-11-26 04:09:54.458287] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.869 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.869 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.869 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 [2024-11-26 04:09:54.474491] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.869 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:52.869 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 NULL1 00:13:52.869 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.869 04:09:54 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:52.869 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 Delay0 00:13:52.869 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.869 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.869 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:13:52.869 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@28 -- # perf_pid=82596 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:52.869 04:09:54 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:53.144 [2024-11-26 04:09:54.668986] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:55.048 04:09:56 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.048 04:09:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.048 04:09:56 -- common/autotest_common.sh@10 -- # set +x 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Read completed with error (sct=0, sc=8) 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.048 starting I/O failed: -6 00:13:55.048 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 
starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 [2024-11-26 04:09:56.707598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d1870 is same with the state(5) to be set 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 
Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 [2024-11-26 04:09:56.708037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d1e70 is same with the state(5) to be set 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 starting I/O failed: -6 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 [2024-11-26 04:09:56.712924] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a2000c350 is same with the state(5) to be set 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Write completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.049 Read completed with error (sct=0, sc=8) 00:13:55.984 [2024-11-26 04:09:57.683326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0070 is same with the state(5) to be set 00:13:55.984 Read completed with error (sct=0, sc=8) 00:13:55.984 Read completed with error (sct=0, sc=8) 00:13:55.984 Write completed with error (sct=0, sc=8) 00:13:55.984 Read completed with error (sct=0, sc=8) 00:13:55.984 Read completed with error (sct=0, sc=8) 00:13:55.984 Write completed with error (sct=0, sc=8) 00:13:55.984 Read completed with error (sct=0, sc=8) 00:13:55.984 Read completed with error (sct=0, sc=8) 
00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 [2024-11-26 04:09:57.708325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d1bc0 is same with the state(5) to be set 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 [2024-11-26 04:09:57.708695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d2120 is same with the state(5) to be set 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 
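The burst of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records here is the point of the test: spdk_nvme_perf is still queueing I/O against the Delay0 namespace when the subsystem is deleted underneath it, so outstanding requests complete with errors and the qpairs are torn down. A condensed sketch of the sequence traced above from delete_subsystem.sh (rpc_cmd is the harness's RPC helper; the bounded retry loop is simplified here, and PIDs are whatever the run assigns):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                      # null backing bdev
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # delay bdev keeps I/O outstanding
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &     # background I/O generator
  perf_pid=$!
  sleep 2
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # delete while I/O is in flight
  while kill -0 "$perf_pid"; do sleep 0.5; done                # wait for perf to exit with errors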
Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 [2024-11-26 04:09:57.711684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a2000bf20 is same with the state(5) to be set 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 Write completed with error (sct=0, sc=8) 00:13:55.985 Read completed with error (sct=0, sc=8) 00:13:55.985 [2024-11-26 04:09:57.712314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a2000c600 is same with the state(5) to be set 00:13:55.985 [2024-11-26 04:09:57.713159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d0070 (9): Bad file descriptor 00:13:55.985 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:55.985 04:09:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.985 04:09:57 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:55.985 04:09:57 -- target/delete_subsystem.sh@35 -- # kill -0 82596 00:13:55.985 04:09:57 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:55.985 Initializing NVMe Controllers 00:13:55.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.985 Controller IO queue size 
128, less than required. 00:13:55.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:55.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:55.985 Initialization complete. Launching workers. 00:13:55.985 ======================================================== 00:13:55.985 Latency(us) 00:13:55.985 Device Information : IOPS MiB/s Average min max 00:13:55.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.06 0.08 888499.05 458.43 1014298.45 00:13:55.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.65 0.08 980427.23 322.60 2002810.26 00:13:55.985 ======================================================== 00:13:55.985 Total : 335.71 0.16 933037.37 322.60 2002810.26 00:13:55.985 00:13:56.553 04:09:58 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:56.553 04:09:58 -- target/delete_subsystem.sh@35 -- # kill -0 82596 00:13:56.553 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82596) - No such process 00:13:56.553 04:09:58 -- target/delete_subsystem.sh@45 -- # NOT wait 82596 00:13:56.553 04:09:58 -- common/autotest_common.sh@650 -- # local es=0 00:13:56.553 04:09:58 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82596 00:13:56.553 04:09:58 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:56.553 04:09:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.553 04:09:58 -- common/autotest_common.sh@642 -- # type -t wait 00:13:56.553 04:09:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.553 04:09:58 -- common/autotest_common.sh@653 -- # wait 82596 00:13:56.553 04:09:58 -- common/autotest_common.sh@653 -- # es=1 00:13:56.553 04:09:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:56.553 04:09:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:56.553 04:09:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:56.553 04:09:58 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.553 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.553 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:13:56.553 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.553 04:09:58 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.553 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.553 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:13:56.553 [2024-11-26 04:09:58.240667] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.553 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.553 04:09:58 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.553 04:09:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.553 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:13:56.554 04:09:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.554 04:09:58 -- target/delete_subsystem.sh@54 -- # perf_pid=82647 00:13:56.554 04:09:58 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:56.554 04:09:58 -- target/delete_subsystem.sh@57 -- # kill -0 
82647 00:13:56.554 04:09:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:56.554 04:09:58 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:56.812 [2024-11-26 04:09:58.406171] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:57.071 04:09:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:57.071 04:09:58 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:13:57.071 04:09:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:57.638 04:09:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:57.638 04:09:59 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:13:57.638 04:09:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.204 04:09:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.204 04:09:59 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:13:58.204 04:09:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.771 04:10:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.771 04:10:00 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:13:58.771 04:10:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.030 04:10:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.030 04:10:00 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:13:59.030 04:10:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.595 04:10:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.595 04:10:01 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:13:59.595 04:10:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.854 Initializing NVMe Controllers 00:13:59.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.854 Controller IO queue size 128, less than required. 00:13:59.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:59.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:59.854 Initialization complete. Launching workers. 
00:13:59.854 ======================================================== 00:13:59.854 Latency(us) 00:13:59.854 Device Information : IOPS MiB/s Average min max 00:13:59.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004213.44 1000155.78 1043685.13 00:13:59.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006881.35 1000400.11 1041649.47 00:13:59.854 ======================================================== 00:13:59.854 Total : 256.00 0.12 1005547.39 1000155.78 1043685.13 00:13:59.854 00:14:00.113 04:10:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.113 04:10:01 -- target/delete_subsystem.sh@57 -- # kill -0 82647 00:14:00.113 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82647) - No such process 00:14:00.113 04:10:01 -- target/delete_subsystem.sh@67 -- # wait 82647 00:14:00.113 04:10:01 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:00.113 04:10:01 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:00.113 04:10:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:00.113 04:10:01 -- nvmf/common.sh@116 -- # sync 00:14:00.113 04:10:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:00.113 04:10:01 -- nvmf/common.sh@119 -- # set +e 00:14:00.113 04:10:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:00.113 04:10:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:00.113 rmmod nvme_tcp 00:14:00.113 rmmod nvme_fabrics 00:14:00.113 rmmod nvme_keyring 00:14:00.372 04:10:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:00.372 04:10:01 -- nvmf/common.sh@123 -- # set -e 00:14:00.372 04:10:01 -- nvmf/common.sh@124 -- # return 0 00:14:00.372 04:10:01 -- nvmf/common.sh@477 -- # '[' -n 82545 ']' 00:14:00.372 04:10:01 -- nvmf/common.sh@478 -- # killprocess 82545 00:14:00.372 04:10:01 -- common/autotest_common.sh@936 -- # '[' -z 82545 ']' 00:14:00.372 04:10:01 -- common/autotest_common.sh@940 -- # kill -0 82545 00:14:00.372 04:10:01 -- common/autotest_common.sh@941 -- # uname 00:14:00.372 04:10:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:00.372 04:10:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82545 00:14:00.372 04:10:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:00.372 04:10:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:00.372 killing process with pid 82545 00:14:00.372 04:10:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82545' 00:14:00.372 04:10:01 -- common/autotest_common.sh@955 -- # kill 82545 00:14:00.372 04:10:01 -- common/autotest_common.sh@960 -- # wait 82545 00:14:00.631 04:10:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:00.631 04:10:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:00.631 04:10:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:00.631 04:10:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.631 04:10:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:00.631 04:10:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.631 04:10:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.631 04:10:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.631 04:10:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:00.631 00:14:00.631 real 0m9.464s 00:14:00.631 user 0m29.429s 00:14:00.631 sys 0m1.158s 00:14:00.631 04:10:02 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:14:00.631 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:14:00.631 ************************************ 00:14:00.631 END TEST nvmf_delete_subsystem 00:14:00.631 ************************************ 00:14:00.631 04:10:02 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:00.631 04:10:02 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:00.631 04:10:02 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:00.631 04:10:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.631 04:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.631 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:14:00.631 ************************************ 00:14:00.631 START TEST nvmf_host_management 00:14:00.631 ************************************ 00:14:00.631 04:10:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:00.631 * Looking for test storage... 00:14:00.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.631 04:10:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:00.631 04:10:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:00.631 04:10:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:00.891 04:10:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:00.891 04:10:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:00.891 04:10:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:00.891 04:10:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:00.891 04:10:02 -- scripts/common.sh@335 -- # IFS=.-: 00:14:00.891 04:10:02 -- scripts/common.sh@335 -- # read -ra ver1 00:14:00.891 04:10:02 -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.891 04:10:02 -- scripts/common.sh@336 -- # read -ra ver2 00:14:00.891 04:10:02 -- scripts/common.sh@337 -- # local 'op=<' 00:14:00.891 04:10:02 -- scripts/common.sh@339 -- # ver1_l=2 00:14:00.891 04:10:02 -- scripts/common.sh@340 -- # ver2_l=1 00:14:00.891 04:10:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:00.891 04:10:02 -- scripts/common.sh@343 -- # case "$op" in 00:14:00.891 04:10:02 -- scripts/common.sh@344 -- # : 1 00:14:00.891 04:10:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:00.891 04:10:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.891 04:10:02 -- scripts/common.sh@364 -- # decimal 1 00:14:00.891 04:10:02 -- scripts/common.sh@352 -- # local d=1 00:14:00.891 04:10:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.891 04:10:02 -- scripts/common.sh@354 -- # echo 1 00:14:00.891 04:10:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:00.891 04:10:02 -- scripts/common.sh@365 -- # decimal 2 00:14:00.891 04:10:02 -- scripts/common.sh@352 -- # local d=2 00:14:00.891 04:10:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.891 04:10:02 -- scripts/common.sh@354 -- # echo 2 00:14:00.891 04:10:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:00.891 04:10:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:00.891 04:10:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:00.891 04:10:02 -- scripts/common.sh@367 -- # return 0 00:14:00.891 04:10:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.891 04:10:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:00.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.891 --rc genhtml_branch_coverage=1 00:14:00.891 --rc genhtml_function_coverage=1 00:14:00.891 --rc genhtml_legend=1 00:14:00.891 --rc geninfo_all_blocks=1 00:14:00.891 --rc geninfo_unexecuted_blocks=1 00:14:00.891 00:14:00.891 ' 00:14:00.891 04:10:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:00.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.891 --rc genhtml_branch_coverage=1 00:14:00.891 --rc genhtml_function_coverage=1 00:14:00.891 --rc genhtml_legend=1 00:14:00.891 --rc geninfo_all_blocks=1 00:14:00.891 --rc geninfo_unexecuted_blocks=1 00:14:00.891 00:14:00.891 ' 00:14:00.891 04:10:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:00.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.891 --rc genhtml_branch_coverage=1 00:14:00.891 --rc genhtml_function_coverage=1 00:14:00.891 --rc genhtml_legend=1 00:14:00.891 --rc geninfo_all_blocks=1 00:14:00.891 --rc geninfo_unexecuted_blocks=1 00:14:00.891 00:14:00.891 ' 00:14:00.891 04:10:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:00.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.891 --rc genhtml_branch_coverage=1 00:14:00.891 --rc genhtml_function_coverage=1 00:14:00.891 --rc genhtml_legend=1 00:14:00.891 --rc geninfo_all_blocks=1 00:14:00.891 --rc geninfo_unexecuted_blocks=1 00:14:00.891 00:14:00.891 ' 00:14:00.891 04:10:02 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.891 04:10:02 -- nvmf/common.sh@7 -- # uname -s 00:14:00.891 04:10:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.891 04:10:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.891 04:10:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.891 04:10:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.891 04:10:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.891 04:10:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.891 04:10:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.891 04:10:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.891 04:10:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.891 04:10:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.891 04:10:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
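nvme gen-hostnqn above yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the NVME_HOSTID that common.sh records immediately afterwards is that bare UUID; the pair is later handed to every nvme connect through the NVME_HOST array. A minimal sketch reproducing the same values by hand (the exact extraction inside common.sh may differ) is:

# hedged sketch of the host-identity setup this log records
NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:06ec455a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")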
00:14:00.891 04:10:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:14:00.891 04:10:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.891 04:10:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.891 04:10:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.891 04:10:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.891 04:10:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.891 04:10:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.891 04:10:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.891 04:10:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.891 04:10:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.891 04:10:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.891 04:10:02 -- paths/export.sh@5 -- # export PATH 00:14:00.891 04:10:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.891 04:10:02 -- nvmf/common.sh@46 -- # : 0 00:14:00.891 04:10:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.891 04:10:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.891 04:10:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.891 04:10:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.891 04:10:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.891 04:10:02 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:00.891 04:10:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.891 04:10:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.891 04:10:02 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.891 04:10:02 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.891 04:10:02 -- target/host_management.sh@104 -- # nvmftestinit 00:14:00.891 04:10:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:00.891 04:10:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.892 04:10:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:00.892 04:10:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:00.892 04:10:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:00.892 04:10:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.892 04:10:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.892 04:10:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.892 04:10:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:00.892 04:10:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:00.892 04:10:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:00.892 04:10:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:00.892 04:10:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:00.892 04:10:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:00.892 04:10:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.892 04:10:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.892 04:10:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.892 04:10:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:00.892 04:10:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.892 04:10:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.892 04:10:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.892 04:10:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.892 04:10:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.892 04:10:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.892 04:10:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.892 04:10:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.892 04:10:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:00.892 04:10:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:00.892 Cannot find device "nvmf_tgt_br" 00:14:00.892 04:10:02 -- nvmf/common.sh@154 -- # true 00:14:00.892 04:10:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.892 Cannot find device "nvmf_tgt_br2" 00:14:00.892 04:10:02 -- nvmf/common.sh@155 -- # true 00:14:00.892 04:10:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:00.892 04:10:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.892 Cannot find device "nvmf_tgt_br" 00:14:00.892 04:10:02 -- nvmf/common.sh@157 -- # true 00:14:00.892 04:10:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.892 Cannot find device "nvmf_tgt_br2" 00:14:00.892 04:10:02 -- nvmf/common.sh@158 -- # true 00:14:00.892 04:10:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.892 04:10:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.892 04:10:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:00.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.892 04:10:02 -- nvmf/common.sh@161 -- # true 00:14:00.892 04:10:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.892 04:10:02 -- nvmf/common.sh@162 -- # true 00:14:00.892 04:10:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.892 04:10:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.892 04:10:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.892 04:10:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:01.151 04:10:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:01.151 04:10:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:01.151 04:10:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:01.151 04:10:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:01.151 04:10:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:01.151 04:10:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:01.152 04:10:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:01.152 04:10:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:01.152 04:10:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:01.152 04:10:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:01.152 04:10:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:01.152 04:10:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:01.152 04:10:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:01.152 04:10:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:01.152 04:10:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.152 04:10:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:01.152 04:10:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.152 04:10:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.152 04:10:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.152 04:10:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:01.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:01.152 00:14:01.152 --- 10.0.0.2 ping statistics --- 00:14:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.152 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:01.152 04:10:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:01.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:01.152 00:14:01.152 --- 10.0.0.3 ping statistics --- 00:14:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.152 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:01.152 04:10:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:01.152 00:14:01.152 --- 10.0.0.1 ping statistics --- 00:14:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.152 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:01.152 04:10:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.152 04:10:02 -- nvmf/common.sh@421 -- # return 0 00:14:01.152 04:10:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:01.152 04:10:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.152 04:10:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:01.152 04:10:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:01.152 04:10:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.152 04:10:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:01.152 04:10:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:01.152 04:10:02 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:01.152 04:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:01.152 04:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.152 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.152 ************************************ 00:14:01.152 START TEST nvmf_host_management 00:14:01.152 ************************************ 00:14:01.152 04:10:02 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:01.152 04:10:02 -- target/host_management.sh@69 -- # starttarget 00:14:01.152 04:10:02 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:01.152 04:10:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:01.152 04:10:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.152 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.152 04:10:02 -- nvmf/common.sh@469 -- # nvmfpid=82883 00:14:01.152 04:10:02 -- nvmf/common.sh@470 -- # waitforlisten 82883 00:14:01.152 04:10:02 -- common/autotest_common.sh@829 -- # '[' -z 82883 ']' 00:14:01.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.152 04:10:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.152 04:10:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:01.152 04:10:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.152 04:10:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.152 04:10:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.152 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:14:01.411 [2024-11-26 04:10:02.927702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:01.411 [2024-11-26 04:10:02.927778] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.411 [2024-11-26 04:10:03.055220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.411 [2024-11-26 04:10:03.113444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:01.411 [2024-11-26 04:10:03.113577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:01.411 [2024-11-26 04:10:03.113590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.411 [2024-11-26 04:10:03.113598] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.411 [2024-11-26 04:10:03.113752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.411 [2024-11-26 04:10:03.114665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.411 [2024-11-26 04:10:03.114827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:01.411 [2024-11-26 04:10:03.114836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.348 04:10:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.348 04:10:03 -- common/autotest_common.sh@862 -- # return 0 00:14:02.348 04:10:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:02.348 04:10:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.348 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 04:10:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.348 04:10:03 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.348 04:10:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.348 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 [2024-11-26 04:10:03.942831] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.348 04:10:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.348 04:10:03 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:02.348 04:10:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.348 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 04:10:03 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:02.348 04:10:03 -- target/host_management.sh@23 -- # cat 00:14:02.348 04:10:03 -- target/host_management.sh@30 -- # rpc_cmd 00:14:02.348 04:10:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.348 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 Malloc0 00:14:02.348 [2024-11-26 04:10:04.030498] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.348 04:10:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.348 04:10:04 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:02.348 04:10:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.348 04:10:04 -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 04:10:04 -- target/host_management.sh@73 -- # perfpid=82955 00:14:02.348 04:10:04 -- target/host_management.sh@74 -- # waitforlisten 82955 /var/tmp/bdevperf.sock 00:14:02.348 04:10:04 -- common/autotest_common.sh@829 -- # '[' -z 82955 ']' 00:14:02.348 04:10:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.348 04:10:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.348 04:10:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
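Up to this point the target side has been assembled entirely over the RPC socket: nvmf_tgt was started inside the nvmf_tgt_ns_spdk namespace, a TCP transport was created with -t tcp -o -u 8192, a 64 MiB / 512 B Malloc0 bdev was added, and the subsystem ends up listening on 10.0.0.2 port 4420. The rpcs.txt batch itself is not echoed in the log; a plausible hand-typed equivalent via scripts/rpc.py, using only names and values that do appear in this run (Malloc0, cnode0, host0, SPDKISFASTANDAWESOME), would be:

# hedged reconstruction of the target bring-up; the real rpcs.txt batch may differ in detail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # per-host access control, no allow-any-host
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420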
00:14:02.348 04:10:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.348 04:10:04 -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 04:10:04 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:02.348 04:10:04 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:02.349 04:10:04 -- nvmf/common.sh@520 -- # config=() 00:14:02.349 04:10:04 -- nvmf/common.sh@520 -- # local subsystem config 00:14:02.349 04:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:02.349 04:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:02.349 { 00:14:02.349 "params": { 00:14:02.349 "name": "Nvme$subsystem", 00:14:02.349 "trtype": "$TEST_TRANSPORT", 00:14:02.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.349 "adrfam": "ipv4", 00:14:02.349 "trsvcid": "$NVMF_PORT", 00:14:02.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.349 "hdgst": ${hdgst:-false}, 00:14:02.349 "ddgst": ${ddgst:-false} 00:14:02.349 }, 00:14:02.349 "method": "bdev_nvme_attach_controller" 00:14:02.349 } 00:14:02.349 EOF 00:14:02.349 )") 00:14:02.349 04:10:04 -- nvmf/common.sh@542 -- # cat 00:14:02.349 04:10:04 -- nvmf/common.sh@544 -- # jq . 00:14:02.349 04:10:04 -- nvmf/common.sh@545 -- # IFS=, 00:14:02.349 04:10:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:02.349 "params": { 00:14:02.349 "name": "Nvme0", 00:14:02.349 "trtype": "tcp", 00:14:02.349 "traddr": "10.0.0.2", 00:14:02.349 "adrfam": "ipv4", 00:14:02.349 "trsvcid": "4420", 00:14:02.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:02.349 "hdgst": false, 00:14:02.349 "ddgst": false 00:14:02.349 }, 00:14:02.349 "method": "bdev_nvme_attach_controller" 00:14:02.349 }' 00:14:02.608 [2024-11-26 04:10:04.136867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:02.608 [2024-11-26 04:10:04.136949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82955 ] 00:14:02.608 [2024-11-26 04:10:04.280157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.608 [2024-11-26 04:10:04.363902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.868 Running I/O for 10 seconds... 
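bdevperf receives its initiator configuration as JSON on a file descriptor (--json /dev/fd/63): the params block printed just above tells it to attach a controller named Nvme0 to the listener at 10.0.0.2:4420 as host0, which exposes the namespace as bdev Nvme0n1. A standalone way to reproduce this run outside the harness is sketched below; the surrounding "subsystems"/"bdev" wrapper that gen_nvmf_target_json adds is assumed (it is not echoed in the log), and the scratch file path is chosen only for the example:

# sketch: drive bdevperf by hand with the same parameters as the 10-second run above
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10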
00:14:03.437 04:10:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.437 04:10:05 -- common/autotest_common.sh@862 -- # return 0 00:14:03.437 04:10:05 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:03.437 04:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.437 04:10:05 -- common/autotest_common.sh@10 -- # set +x 00:14:03.437 04:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.437 04:10:05 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.437 04:10:05 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:03.437 04:10:05 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:03.437 04:10:05 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:03.437 04:10:05 -- target/host_management.sh@52 -- # local ret=1 00:14:03.437 04:10:05 -- target/host_management.sh@53 -- # local i 00:14:03.437 04:10:05 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:03.437 04:10:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:03.437 04:10:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:03.437 04:10:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:03.437 04:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.437 04:10:05 -- common/autotest_common.sh@10 -- # set +x 00:14:03.698 04:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.698 04:10:05 -- target/host_management.sh@55 -- # read_io_count=2365 00:14:03.698 04:10:05 -- target/host_management.sh@58 -- # '[' 2365 -ge 100 ']' 00:14:03.698 04:10:05 -- target/host_management.sh@59 -- # ret=0 00:14:03.698 04:10:05 -- target/host_management.sh@60 -- # break 00:14:03.698 04:10:05 -- target/host_management.sh@64 -- # return 0 00:14:03.698 04:10:05 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:03.698 04:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.698 04:10:05 -- common/autotest_common.sh@10 -- # set +x 00:14:03.698 [2024-11-26 04:10:05.232426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the 
state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.232600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035e70 is same with the state(5) to be set 00:14:03.698 [2024-11-26 04:10:05.235559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.235986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.235996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.236005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.236014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.236024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.236031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.698 [2024-11-26 04:10:05.236041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.698 [2024-11-26 04:10:05.236048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:03.699 [2024-11-26 04:10:05.236368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 
04:10:05.236541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.699 [2024-11-26 04:10:05.236800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.699 [2024-11-26 04:10:05.236810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.700 [2024-11-26 04:10:05.236818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.236828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.700 [2024-11-26 04:10:05.236836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.236846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.700 [2024-11-26 04:10:05.236853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.236863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.700 [2024-11-26 04:10:05.236870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.236879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.700 [2024-11-26 04:10:05.236887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.700 [2024-11-26 04:10:05.236903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.237025] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x78bdc0 was disconnected and freed. reset controller. 
00:14:03.700 task offset: 59136 on job bdev=Nvme0n1 fails 00:14:03.700 00:14:03.700 Latency(us) 00:14:03.700 [2024-11-26T04:10:05.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.700 [2024-11-26T04:10:05.468Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:03.700 [2024-11-26T04:10:05.468Z] Job: Nvme0n1 ended in about 0.66 seconds with error 00:14:03.700 Verification LBA range: start 0x0 length 0x400 00:14:03.700 Nvme0n1 : 0.66 3831.59 239.47 96.47 0.00 16025.75 1869.27 24665.37 00:14:03.700 [2024-11-26T04:10:05.468Z] =================================================================================================================== 00:14:03.700 [2024-11-26T04:10:05.468Z] Total : 3831.59 239.47 96.47 0.00 16025.75 1869.27 24665.37 00:14:03.700 [2024-11-26 04:10:05.238215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:03.700 [2024-11-26 04:10:05.240114] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.700 [2024-11-26 04:10:05.240148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e7a70 (9): Bad file descriptor 00:14:03.700 [2024-11-26 04:10:05.241397] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:03.700 [2024-11-26 04:10:05.241490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:03.700 [2024-11-26 04:10:05.241512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.700 [2024-11-26 04:10:05.241528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:03.700 [2024-11-26 04:10:05.241537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:03.700 [2024-11-26 04:10:05.241546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:03.700 [2024-11-26 04:10:05.241554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6e7a70 00:14:03.700 [2024-11-26 04:10:05.241594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e7a70 (9): Bad file descriptor 00:14:03.700 [2024-11-26 04:10:05.241612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:03.700 [2024-11-26 04:10:05.241621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:03.700 [2024-11-26 04:10:05.241631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:03.700 [2024-11-26 04:10:05.241645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
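Everything from the SQ DELETION aborts above down to "Resetting controller failed" is the intended effect of the nvmf_subsystem_remove_host call issued a few lines earlier: dropping nqn.2016-06.io.spdk:host0 from the subsystem's allowed-host list tears down the live queue pair mid-I/O, and the host's automatic reconnect is then refused at FABRIC CONNECT time ("does not allow host", sct 1 / sc 132). The access-control toggle being exercised boils down to two RPCs, sketched here via scripts/rpc.py with the NQNs taken from this log:

# remove the host -> in-flight I/O aborts and reconnects are denied
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# add it back -> the next bdevperf run below connects normally again
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0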
00:14:03.700 04:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.700 04:10:05 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:03.700 04:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.700 04:10:05 -- common/autotest_common.sh@10 -- # set +x 00:14:03.700 04:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.700 04:10:05 -- target/host_management.sh@87 -- # sleep 1 00:14:04.638 04:10:06 -- target/host_management.sh@91 -- # kill -9 82955 00:14:04.638 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82955) - No such process 00:14:04.638 04:10:06 -- target/host_management.sh@91 -- # true 00:14:04.638 04:10:06 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:04.638 04:10:06 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:04.638 04:10:06 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:04.638 04:10:06 -- nvmf/common.sh@520 -- # config=() 00:14:04.638 04:10:06 -- nvmf/common.sh@520 -- # local subsystem config 00:14:04.638 04:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:04.638 04:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:04.638 { 00:14:04.638 "params": { 00:14:04.638 "name": "Nvme$subsystem", 00:14:04.638 "trtype": "$TEST_TRANSPORT", 00:14:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:04.638 "adrfam": "ipv4", 00:14:04.638 "trsvcid": "$NVMF_PORT", 00:14:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:04.638 "hdgst": ${hdgst:-false}, 00:14:04.638 "ddgst": ${ddgst:-false} 00:14:04.638 }, 00:14:04.638 "method": "bdev_nvme_attach_controller" 00:14:04.638 } 00:14:04.638 EOF 00:14:04.638 )") 00:14:04.638 04:10:06 -- nvmf/common.sh@542 -- # cat 00:14:04.638 04:10:06 -- nvmf/common.sh@544 -- # jq . 00:14:04.638 04:10:06 -- nvmf/common.sh@545 -- # IFS=, 00:14:04.638 04:10:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:04.638 "params": { 00:14:04.638 "name": "Nvme0", 00:14:04.638 "trtype": "tcp", 00:14:04.638 "traddr": "10.0.0.2", 00:14:04.638 "adrfam": "ipv4", 00:14:04.638 "trsvcid": "4420", 00:14:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:04.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:04.638 "hdgst": false, 00:14:04.638 "ddgst": false 00:14:04.638 }, 00:14:04.638 "method": "bdev_nvme_attach_controller" 00:14:04.638 }' 00:14:04.638 [2024-11-26 04:10:06.321225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:04.638 [2024-11-26 04:10:06.321323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83005 ] 00:14:04.898 [2024-11-26 04:10:06.460476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.898 [2024-11-26 04:10:06.524704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.157 Running I/O for 1 seconds... 
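As with the 10-second job, progress of this 1-second run can be observed from outside bdevperf through its RPC socket; the waitforio helper used earlier does exactly that, treating num_read_ops climbing past 100 as proof that I/O is flowing again. A standalone sketch of such a polling loop, against the same socket and bdev name used here, might look like:

# poll read ops on Nvme0n1 via the bdevperf RPC socket (waitforio-style check)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in {1..10}; do
    ops=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    (( ops >= 100 )) && break       # same '-ge 100' threshold the script checks
    sleep 0.5
done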
00:14:06.094 00:14:06.094 Latency(us) 00:14:06.094 [2024-11-26T04:10:07.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.094 [2024-11-26T04:10:07.862Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:06.094 Verification LBA range: start 0x0 length 0x400 00:14:06.094 Nvme0n1 : 1.01 4028.17 251.76 0.00 0.00 15613.45 1392.64 21805.61 00:14:06.094 [2024-11-26T04:10:07.862Z] =================================================================================================================== 00:14:06.094 [2024-11-26T04:10:07.862Z] Total : 4028.17 251.76 0.00 0.00 15613.45 1392.64 21805.61 00:14:06.353 04:10:08 -- target/host_management.sh@101 -- # stoptarget 00:14:06.353 04:10:08 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:06.353 04:10:08 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:06.353 04:10:08 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:06.353 04:10:08 -- target/host_management.sh@40 -- # nvmftestfini 00:14:06.353 04:10:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:06.353 04:10:08 -- nvmf/common.sh@116 -- # sync 00:14:06.353 04:10:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:06.353 04:10:08 -- nvmf/common.sh@119 -- # set +e 00:14:06.353 04:10:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:06.353 04:10:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:06.353 rmmod nvme_tcp 00:14:06.353 rmmod nvme_fabrics 00:14:06.612 rmmod nvme_keyring 00:14:06.612 04:10:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:06.612 04:10:08 -- nvmf/common.sh@123 -- # set -e 00:14:06.612 04:10:08 -- nvmf/common.sh@124 -- # return 0 00:14:06.612 04:10:08 -- nvmf/common.sh@477 -- # '[' -n 82883 ']' 00:14:06.612 04:10:08 -- nvmf/common.sh@478 -- # killprocess 82883 00:14:06.612 04:10:08 -- common/autotest_common.sh@936 -- # '[' -z 82883 ']' 00:14:06.612 04:10:08 -- common/autotest_common.sh@940 -- # kill -0 82883 00:14:06.612 04:10:08 -- common/autotest_common.sh@941 -- # uname 00:14:06.612 04:10:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:06.612 04:10:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82883 00:14:06.612 04:10:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:06.612 04:10:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:06.612 killing process with pid 82883 00:14:06.612 04:10:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82883' 00:14:06.612 04:10:08 -- common/autotest_common.sh@955 -- # kill 82883 00:14:06.612 04:10:08 -- common/autotest_common.sh@960 -- # wait 82883 00:14:06.870 [2024-11-26 04:10:08.375477] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:06.870 04:10:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:06.870 04:10:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:06.870 04:10:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:06.870 04:10:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.870 04:10:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.870 04:10:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.870 04:10:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.870 04:10:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.870 04:10:08 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:06.870 00:14:06.870 real 0m5.605s 00:14:06.870 user 0m23.649s 00:14:06.870 sys 0m1.332s 00:14:06.870 04:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.870 ************************************ 00:14:06.870 END TEST nvmf_host_management 00:14:06.870 ************************************ 00:14:06.870 04:10:08 -- common/autotest_common.sh@10 -- # set +x 00:14:06.870 04:10:08 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:06.870 00:14:06.870 real 0m6.252s 00:14:06.870 user 0m23.854s 00:14:06.870 sys 0m1.599s 00:14:06.870 04:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.870 ************************************ 00:14:06.870 END TEST nvmf_host_management 00:14:06.870 ************************************ 00:14:06.870 04:10:08 -- common/autotest_common.sh@10 -- # set +x 00:14:06.870 04:10:08 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:06.870 04:10:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.870 04:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.870 04:10:08 -- common/autotest_common.sh@10 -- # set +x 00:14:06.870 ************************************ 00:14:06.870 START TEST nvmf_lvol 00:14:06.870 ************************************ 00:14:06.870 04:10:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:07.130 * Looking for test storage... 00:14:07.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.130 04:10:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:07.130 04:10:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:07.130 04:10:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:07.130 04:10:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:07.130 04:10:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:07.130 04:10:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:07.130 04:10:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:07.130 04:10:08 -- scripts/common.sh@335 -- # IFS=.-: 00:14:07.130 04:10:08 -- scripts/common.sh@335 -- # read -ra ver1 00:14:07.130 04:10:08 -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.130 04:10:08 -- scripts/common.sh@336 -- # read -ra ver2 00:14:07.130 04:10:08 -- scripts/common.sh@337 -- # local 'op=<' 00:14:07.130 04:10:08 -- scripts/common.sh@339 -- # ver1_l=2 00:14:07.130 04:10:08 -- scripts/common.sh@340 -- # ver2_l=1 00:14:07.130 04:10:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:07.130 04:10:08 -- scripts/common.sh@343 -- # case "$op" in 00:14:07.130 04:10:08 -- scripts/common.sh@344 -- # : 1 00:14:07.130 04:10:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:07.130 04:10:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.130 04:10:08 -- scripts/common.sh@364 -- # decimal 1 00:14:07.130 04:10:08 -- scripts/common.sh@352 -- # local d=1 00:14:07.130 04:10:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.130 04:10:08 -- scripts/common.sh@354 -- # echo 1 00:14:07.130 04:10:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:07.130 04:10:08 -- scripts/common.sh@365 -- # decimal 2 00:14:07.130 04:10:08 -- scripts/common.sh@352 -- # local d=2 00:14:07.130 04:10:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.130 04:10:08 -- scripts/common.sh@354 -- # echo 2 00:14:07.130 04:10:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:07.130 04:10:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:07.130 04:10:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:07.130 04:10:08 -- scripts/common.sh@367 -- # return 0 00:14:07.130 04:10:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.130 04:10:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.130 --rc genhtml_branch_coverage=1 00:14:07.130 --rc genhtml_function_coverage=1 00:14:07.130 --rc genhtml_legend=1 00:14:07.130 --rc geninfo_all_blocks=1 00:14:07.130 --rc geninfo_unexecuted_blocks=1 00:14:07.130 00:14:07.130 ' 00:14:07.130 04:10:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.130 --rc genhtml_branch_coverage=1 00:14:07.130 --rc genhtml_function_coverage=1 00:14:07.130 --rc genhtml_legend=1 00:14:07.130 --rc geninfo_all_blocks=1 00:14:07.130 --rc geninfo_unexecuted_blocks=1 00:14:07.130 00:14:07.130 ' 00:14:07.130 04:10:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.130 --rc genhtml_branch_coverage=1 00:14:07.130 --rc genhtml_function_coverage=1 00:14:07.130 --rc genhtml_legend=1 00:14:07.130 --rc geninfo_all_blocks=1 00:14:07.130 --rc geninfo_unexecuted_blocks=1 00:14:07.130 00:14:07.130 ' 00:14:07.130 04:10:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.130 --rc genhtml_branch_coverage=1 00:14:07.130 --rc genhtml_function_coverage=1 00:14:07.130 --rc genhtml_legend=1 00:14:07.130 --rc geninfo_all_blocks=1 00:14:07.130 --rc geninfo_unexecuted_blocks=1 00:14:07.130 00:14:07.130 ' 00:14:07.130 04:10:08 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.130 04:10:08 -- nvmf/common.sh@7 -- # uname -s 00:14:07.130 04:10:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.130 04:10:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.130 04:10:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.130 04:10:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.130 04:10:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.130 04:10:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.130 04:10:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.130 04:10:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.130 04:10:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.130 04:10:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.130 04:10:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:14:07.130 
04:10:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:14:07.130 04:10:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.130 04:10:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.130 04:10:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.130 04:10:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.130 04:10:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.130 04:10:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.130 04:10:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.130 04:10:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.130 04:10:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.130 04:10:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.130 04:10:08 -- paths/export.sh@5 -- # export PATH 00:14:07.131 04:10:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.131 04:10:08 -- nvmf/common.sh@46 -- # : 0 00:14:07.131 04:10:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:07.131 04:10:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:07.131 04:10:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:07.131 04:10:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.131 04:10:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.131 04:10:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
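The NVME_CONNECT and NVME_HOST variables defined above are the kernel-initiator counterpart of the user-space bdevperf path exercised in this run; they are not used here, but tests that log in with nvme-cli expand them roughly as in the sketch below (addresses and NQNs taken from the values printed above; the command itself is illustrative and not part of this log):

  # connect a kernel NVMe/TCP initiator using the generated host identity
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
      --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157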
00:14:07.131 04:10:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:07.131 04:10:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:07.131 04:10:08 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.131 04:10:08 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.131 04:10:08 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:07.131 04:10:08 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:07.131 04:10:08 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.131 04:10:08 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:07.131 04:10:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:07.131 04:10:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.131 04:10:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:07.131 04:10:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:07.131 04:10:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:07.131 04:10:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.131 04:10:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.131 04:10:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.131 04:10:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:07.131 04:10:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:07.131 04:10:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:07.131 04:10:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:07.131 04:10:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:07.131 04:10:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:07.131 04:10:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.131 04:10:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.131 04:10:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.131 04:10:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:07.131 04:10:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.131 04:10:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.131 04:10:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.131 04:10:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.131 04:10:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.131 04:10:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.131 04:10:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.131 04:10:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.131 04:10:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:07.131 04:10:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:07.131 Cannot find device "nvmf_tgt_br" 00:14:07.131 04:10:08 -- nvmf/common.sh@154 -- # true 00:14:07.131 04:10:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.131 Cannot find device "nvmf_tgt_br2" 00:14:07.131 04:10:08 -- nvmf/common.sh@155 -- # true 00:14:07.131 04:10:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:07.131 04:10:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:07.131 Cannot find device "nvmf_tgt_br" 00:14:07.131 04:10:08 -- nvmf/common.sh@157 -- # true 00:14:07.131 04:10:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:07.131 Cannot find device "nvmf_tgt_br2" 00:14:07.131 04:10:08 -- nvmf/common.sh@158 -- # true 00:14:07.131 04:10:08 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:07.131 04:10:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:07.131 04:10:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.131 04:10:08 -- nvmf/common.sh@161 -- # true 00:14:07.131 04:10:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.390 04:10:08 -- nvmf/common.sh@162 -- # true 00:14:07.390 04:10:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.390 04:10:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.390 04:10:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.390 04:10:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.391 04:10:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.391 04:10:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.391 04:10:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.391 04:10:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:07.391 04:10:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:07.391 04:10:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:07.391 04:10:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:07.391 04:10:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:07.391 04:10:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:07.391 04:10:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.391 04:10:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.391 04:10:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.391 04:10:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:07.391 04:10:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:07.391 04:10:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.391 04:10:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.391 04:10:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.391 04:10:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.391 04:10:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.391 04:10:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:07.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:07.391 00:14:07.391 --- 10.0.0.2 ping statistics --- 00:14:07.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.391 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:07.391 04:10:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:07.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:07.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:07.391 00:14:07.391 --- 10.0.0.3 ping statistics --- 00:14:07.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.391 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:07.391 04:10:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:14:07.391 00:14:07.391 --- 10.0.0.1 ping statistics --- 00:14:07.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.391 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:07.391 04:10:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.391 04:10:09 -- nvmf/common.sh@421 -- # return 0 00:14:07.391 04:10:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:07.391 04:10:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.391 04:10:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:07.391 04:10:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:07.391 04:10:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.391 04:10:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:07.391 04:10:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:07.391 04:10:09 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:07.391 04:10:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:07.391 04:10:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.391 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:14:07.391 04:10:09 -- nvmf/common.sh@469 -- # nvmfpid=83251 00:14:07.391 04:10:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:07.391 04:10:09 -- nvmf/common.sh@470 -- # waitforlisten 83251 00:14:07.391 04:10:09 -- common/autotest_common.sh@829 -- # '[' -z 83251 ']' 00:14:07.391 04:10:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.391 04:10:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.391 04:10:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.391 04:10:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.391 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:14:07.650 [2024-11-26 04:10:09.160634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:07.650 [2024-11-26 04:10:09.160692] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.650 [2024-11-26 04:10:09.298236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.650 [2024-11-26 04:10:09.381788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:07.650 [2024-11-26 04:10:09.381985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.650 [2024-11-26 04:10:09.382009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
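The nvmf_veth_init sequence above builds the test topology out of veth pairs: nvmf_init_if stays in the root namespace as the initiator-side interface, nvmf_tgt_if and nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace for the target, the peer ends are enslaved to the nvmf_br bridge, and the pings verify 10.0.0.1/2/3 before the target starts. A condensed sketch of the same setup, using the interface names and addresses from the log (run as root; the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-side reachability check of the target address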
00:14:07.650 [2024-11-26 04:10:09.382021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.650 [2024-11-26 04:10:09.382192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.650 [2024-11-26 04:10:09.382326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.650 [2024-11-26 04:10:09.382338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.585 04:10:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.585 04:10:10 -- common/autotest_common.sh@862 -- # return 0 00:14:08.585 04:10:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:08.585 04:10:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.585 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.585 04:10:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.585 04:10:10 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.844 [2024-11-26 04:10:10.436928] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.844 04:10:10 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.104 04:10:10 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:09.104 04:10:10 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.363 04:10:11 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:09.363 04:10:11 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:09.622 04:10:11 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:09.881 04:10:11 -- target/nvmf_lvol.sh@29 -- # lvs=c4c2809b-6586-47e5-ac18-d456d2f4585d 00:14:09.881 04:10:11 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c4c2809b-6586-47e5-ac18-d456d2f4585d lvol 20 00:14:10.139 04:10:11 -- target/nvmf_lvol.sh@32 -- # lvol=978ec652-ac4c-462e-8177-5f1540027c02 00:14:10.139 04:10:11 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:10.398 04:10:12 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 978ec652-ac4c-462e-8177-5f1540027c02 00:14:10.657 04:10:12 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:10.915 [2024-11-26 04:10:12.512332] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.915 04:10:12 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.174 04:10:12 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:11.174 04:10:12 -- target/nvmf_lvol.sh@42 -- # perf_pid=83400 00:14:11.174 04:10:12 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:12.110 04:10:13 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 978ec652-ac4c-462e-8177-5f1540027c02 MY_SNAPSHOT 
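The lvol test above assembles two 64 MiB malloc bdevs into a raid0 (64 KiB strip size), creates an lvstore on the raid, carves out a 20 MiB lvol, exports it over NVMe/TCP, and, with spdk_nvme_perf writing to it, starts the snapshot path whose first command appears just above. Condensed as a sketch (the lvol UUID is the one captured in this run; the snapshot and clone UUIDs are placeholders for the values the commands return):

  scripts/rpc.py bdev_lvol_snapshot 978ec652-ac4c-462e-8177-5f1540027c02 MY_SNAPSHOT   # returns the snapshot UUID
  scripts/rpc.py bdev_lvol_resize 978ec652-ac4c-462e-8177-5f1540027c02 30              # grow the live lvol from 20 to 30 MiB
  scripts/rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE                              # thin clone of the snapshot
  scripts/rpc.py bdev_lvol_inflate <clone-uuid>                                        # allocate all clusters, decoupling the clone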
00:14:12.369 04:10:14 -- target/nvmf_lvol.sh@47 -- # snapshot=152eb9a0-eee7-45ac-934f-b6c8ec1f70b7 00:14:12.369 04:10:14 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 978ec652-ac4c-462e-8177-5f1540027c02 30 00:14:12.936 04:10:14 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 152eb9a0-eee7-45ac-934f-b6c8ec1f70b7 MY_CLONE 00:14:13.195 04:10:14 -- target/nvmf_lvol.sh@49 -- # clone=4c1f4014-3ae7-4038-9c90-df7ac5175752 00:14:13.195 04:10:14 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4c1f4014-3ae7-4038-9c90-df7ac5175752 00:14:13.764 04:10:15 -- target/nvmf_lvol.sh@53 -- # wait 83400 00:14:21.882 Initializing NVMe Controllers 00:14:21.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:21.882 Controller IO queue size 128, less than required. 00:14:21.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:21.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:21.882 Initialization complete. Launching workers. 00:14:21.882 ======================================================== 00:14:21.882 Latency(us) 00:14:21.882 Device Information : IOPS MiB/s Average min max 00:14:21.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7462.19 29.15 17164.82 1559.00 58372.81 00:14:21.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7937.59 31.01 16139.45 401.13 130644.59 00:14:21.882 ======================================================== 00:14:21.882 Total : 15399.78 60.16 16636.31 401.13 130644.59 00:14:21.882 00:14:21.882 04:10:23 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:21.882 04:10:23 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 978ec652-ac4c-462e-8177-5f1540027c02 00:14:21.882 04:10:23 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4c2809b-6586-47e5-ac18-d456d2f4585d 00:14:22.142 04:10:23 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:22.142 04:10:23 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:22.142 04:10:23 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:22.142 04:10:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:22.142 04:10:23 -- nvmf/common.sh@116 -- # sync 00:14:22.142 04:10:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:22.142 04:10:23 -- nvmf/common.sh@119 -- # set +e 00:14:22.142 04:10:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:22.142 04:10:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:22.142 rmmod nvme_tcp 00:14:22.401 rmmod nvme_fabrics 00:14:22.401 rmmod nvme_keyring 00:14:22.401 04:10:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:22.401 04:10:23 -- nvmf/common.sh@123 -- # set -e 00:14:22.401 04:10:23 -- nvmf/common.sh@124 -- # return 0 00:14:22.401 04:10:23 -- nvmf/common.sh@477 -- # '[' -n 83251 ']' 00:14:22.401 04:10:23 -- nvmf/common.sh@478 -- # killprocess 83251 00:14:22.401 04:10:23 -- common/autotest_common.sh@936 -- # '[' -z 83251 ']' 00:14:22.401 04:10:23 -- common/autotest_common.sh@940 -- # kill -0 83251 00:14:22.401 04:10:23 -- common/autotest_common.sh@941 -- # uname 00:14:22.401 
04:10:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.401 04:10:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83251 00:14:22.401 killing process with pid 83251 00:14:22.401 04:10:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:22.401 04:10:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:22.401 04:10:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83251' 00:14:22.401 04:10:23 -- common/autotest_common.sh@955 -- # kill 83251 00:14:22.401 04:10:23 -- common/autotest_common.sh@960 -- # wait 83251 00:14:22.660 04:10:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:22.660 04:10:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:22.660 04:10:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:22.660 04:10:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.660 04:10:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:22.660 04:10:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.660 04:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.660 04:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.660 04:10:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:22.660 ************************************ 00:14:22.660 END TEST nvmf_lvol 00:14:22.660 ************************************ 00:14:22.660 00:14:22.660 real 0m15.742s 00:14:22.660 user 1m5.774s 00:14:22.660 sys 0m3.798s 00:14:22.660 04:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:22.660 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:14:22.660 04:10:24 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:22.660 04:10:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:22.660 04:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.660 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:14:22.660 ************************************ 00:14:22.660 START TEST nvmf_lvs_grow 00:14:22.660 ************************************ 00:14:22.660 04:10:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:22.919 * Looking for test storage... 
00:14:22.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:22.919 04:10:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:22.919 04:10:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:22.919 04:10:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:22.919 04:10:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:22.919 04:10:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:22.919 04:10:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:22.919 04:10:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:22.919 04:10:24 -- scripts/common.sh@335 -- # IFS=.-: 00:14:22.919 04:10:24 -- scripts/common.sh@335 -- # read -ra ver1 00:14:22.919 04:10:24 -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.919 04:10:24 -- scripts/common.sh@336 -- # read -ra ver2 00:14:22.919 04:10:24 -- scripts/common.sh@337 -- # local 'op=<' 00:14:22.919 04:10:24 -- scripts/common.sh@339 -- # ver1_l=2 00:14:22.919 04:10:24 -- scripts/common.sh@340 -- # ver2_l=1 00:14:22.919 04:10:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:22.919 04:10:24 -- scripts/common.sh@343 -- # case "$op" in 00:14:22.919 04:10:24 -- scripts/common.sh@344 -- # : 1 00:14:22.919 04:10:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:22.919 04:10:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.919 04:10:24 -- scripts/common.sh@364 -- # decimal 1 00:14:22.919 04:10:24 -- scripts/common.sh@352 -- # local d=1 00:14:22.919 04:10:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.919 04:10:24 -- scripts/common.sh@354 -- # echo 1 00:14:22.919 04:10:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:22.919 04:10:24 -- scripts/common.sh@365 -- # decimal 2 00:14:22.919 04:10:24 -- scripts/common.sh@352 -- # local d=2 00:14:22.919 04:10:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.919 04:10:24 -- scripts/common.sh@354 -- # echo 2 00:14:22.919 04:10:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:22.919 04:10:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:22.920 04:10:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:22.920 04:10:24 -- scripts/common.sh@367 -- # return 0 00:14:22.920 04:10:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.920 04:10:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.920 --rc genhtml_branch_coverage=1 00:14:22.920 --rc genhtml_function_coverage=1 00:14:22.920 --rc genhtml_legend=1 00:14:22.920 --rc geninfo_all_blocks=1 00:14:22.920 --rc geninfo_unexecuted_blocks=1 00:14:22.920 00:14:22.920 ' 00:14:22.920 04:10:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.920 --rc genhtml_branch_coverage=1 00:14:22.920 --rc genhtml_function_coverage=1 00:14:22.920 --rc genhtml_legend=1 00:14:22.920 --rc geninfo_all_blocks=1 00:14:22.920 --rc geninfo_unexecuted_blocks=1 00:14:22.920 00:14:22.920 ' 00:14:22.920 04:10:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.920 --rc genhtml_branch_coverage=1 00:14:22.920 --rc genhtml_function_coverage=1 00:14:22.920 --rc genhtml_legend=1 00:14:22.920 --rc geninfo_all_blocks=1 00:14:22.920 --rc geninfo_unexecuted_blocks=1 00:14:22.920 00:14:22.920 ' 00:14:22.920 
04:10:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.920 --rc genhtml_branch_coverage=1 00:14:22.920 --rc genhtml_function_coverage=1 00:14:22.920 --rc genhtml_legend=1 00:14:22.920 --rc geninfo_all_blocks=1 00:14:22.920 --rc geninfo_unexecuted_blocks=1 00:14:22.920 00:14:22.920 ' 00:14:22.920 04:10:24 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.920 04:10:24 -- nvmf/common.sh@7 -- # uname -s 00:14:22.920 04:10:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.920 04:10:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.920 04:10:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.920 04:10:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.920 04:10:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.920 04:10:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.920 04:10:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.920 04:10:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.920 04:10:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.920 04:10:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.920 04:10:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:14:22.920 04:10:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:14:22.920 04:10:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.920 04:10:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.920 04:10:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.920 04:10:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.920 04:10:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.920 04:10:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.920 04:10:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.920 04:10:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.920 04:10:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.920 04:10:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.920 04:10:24 -- paths/export.sh@5 -- # export PATH 00:14:22.920 04:10:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.920 04:10:24 -- nvmf/common.sh@46 -- # : 0 00:14:22.920 04:10:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:22.920 04:10:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:22.920 04:10:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:22.920 04:10:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.920 04:10:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.920 04:10:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:22.920 04:10:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:22.920 04:10:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:22.920 04:10:24 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.920 04:10:24 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.920 04:10:24 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:22.920 04:10:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:22.920 04:10:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.920 04:10:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:22.920 04:10:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:22.920 04:10:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:22.920 04:10:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.920 04:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.920 04:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.920 04:10:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:22.920 04:10:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:22.920 04:10:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:22.920 04:10:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:22.920 04:10:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:22.920 04:10:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:22.920 04:10:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.920 04:10:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.920 04:10:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:22.920 04:10:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:22.920 04:10:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.920 04:10:24 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.920 04:10:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.920 04:10:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.920 04:10:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.920 04:10:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.920 04:10:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.920 04:10:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.920 04:10:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:22.920 04:10:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:22.920 Cannot find device "nvmf_tgt_br" 00:14:22.920 04:10:24 -- nvmf/common.sh@154 -- # true 00:14:22.920 04:10:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.920 Cannot find device "nvmf_tgt_br2" 00:14:22.920 04:10:24 -- nvmf/common.sh@155 -- # true 00:14:22.920 04:10:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:22.920 04:10:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:22.920 Cannot find device "nvmf_tgt_br" 00:14:22.920 04:10:24 -- nvmf/common.sh@157 -- # true 00:14:22.920 04:10:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:22.920 Cannot find device "nvmf_tgt_br2" 00:14:22.920 04:10:24 -- nvmf/common.sh@158 -- # true 00:14:22.920 04:10:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:23.179 04:10:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:23.179 04:10:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.179 04:10:24 -- nvmf/common.sh@161 -- # true 00:14:23.179 04:10:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.179 04:10:24 -- nvmf/common.sh@162 -- # true 00:14:23.179 04:10:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.179 04:10:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.179 04:10:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.179 04:10:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.179 04:10:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.179 04:10:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.179 04:10:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.179 04:10:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.179 04:10:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.179 04:10:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:23.179 04:10:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:23.179 04:10:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:23.179 04:10:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:23.179 04:10:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.179 04:10:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:23.179 04:10:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.179 04:10:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:23.179 04:10:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:23.179 04:10:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.179 04:10:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.179 04:10:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.179 04:10:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.179 04:10:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.179 04:10:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:23.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:23.179 00:14:23.179 --- 10.0.0.2 ping statistics --- 00:14:23.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.179 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:23.179 04:10:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:23.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:23.179 00:14:23.179 --- 10.0.0.3 ping statistics --- 00:14:23.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.179 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:23.179 04:10:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:23.179 00:14:23.179 --- 10.0.0.1 ping statistics --- 00:14:23.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.179 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:23.179 04:10:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.179 04:10:24 -- nvmf/common.sh@421 -- # return 0 00:14:23.179 04:10:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:23.179 04:10:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.179 04:10:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:23.179 04:10:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:23.179 04:10:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.179 04:10:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:23.179 04:10:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:23.179 04:10:24 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:23.179 04:10:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.179 04:10:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.179 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 04:10:24 -- nvmf/common.sh@469 -- # nvmfpid=83770 00:14:23.179 04:10:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:23.179 04:10:24 -- nvmf/common.sh@470 -- # waitforlisten 83770 00:14:23.179 04:10:24 -- common/autotest_common.sh@829 -- # '[' -z 83770 ']' 00:14:23.179 04:10:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:23.179 04:10:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.179 04:10:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.179 04:10:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.179 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:14:23.442 [2024-11-26 04:10:24.964180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:23.442 [2024-11-26 04:10:24.964362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.442 [2024-11-26 04:10:25.095649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.442 [2024-11-26 04:10:25.167348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:23.442 [2024-11-26 04:10:25.167500] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.442 [2024-11-26 04:10:25.167515] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.442 [2024-11-26 04:10:25.167522] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.442 [2024-11-26 04:10:25.167553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.405 04:10:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.405 04:10:25 -- common/autotest_common.sh@862 -- # return 0 00:14:24.405 04:10:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:24.405 04:10:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:24.405 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:14:24.405 04:10:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.405 04:10:26 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:24.664 [2024-11-26 04:10:26.320317] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:24.664 04:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:24.664 04:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:24.664 04:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:24.664 ************************************ 00:14:24.664 START TEST lvs_grow_clean 00:14:24.664 ************************************ 00:14:24.664 04:10:26 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
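The lvs_grow test backs its lvstore with a plain file exposed as an AIO bdev, so the store can later be enlarged just by growing the file and rescanning. The setup steps that follow in the log, condensed into a sketch (same path, 4096-byte block size and 4 MiB cluster size as below; scripts/rpc.py is assumed to be run from the repository root):

  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs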
00:14:24.664 04:10:26 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:25.231 04:10:26 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:25.231 04:10:26 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:25.231 04:10:26 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:25.231 04:10:26 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:25.231 04:10:26 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:25.489 04:10:27 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:25.489 04:10:27 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:25.489 04:10:27 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a85d915a-04f4-4785-8b3f-20b71ea227d9 lvol 150 00:14:25.747 04:10:27 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f1bf2e54-9eaf-4a30-ac45-0e3f163206e9 00:14:25.747 04:10:27 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.747 04:10:27 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:26.005 [2024-11-26 04:10:27.692699] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:26.005 [2024-11-26 04:10:27.692781] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:26.005 true 00:14:26.005 04:10:27 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:26.005 04:10:27 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:26.263 04:10:27 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:26.263 04:10:27 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:26.521 04:10:28 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1bf2e54-9eaf-4a30-ac45-0e3f163206e9 00:14:26.780 04:10:28 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:27.039 [2024-11-26 04:10:28.561807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.039 04:10:28 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.039 04:10:28 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:27.039 04:10:28 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83928 00:14:27.039 04:10:28 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:27.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
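At this point the backing file has already been grown to 400M and the AIO bdev rescanned (51200 -> 102400 blocks), but the lvstore still reports its original 49 data clusters; once bdevperf is running, the harness calls bdev_lvol_grow_lvstore and checks that total_data_clusters doubles to 99. The grow path, condensed as a sketch using the lvstore UUID captured above:

  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev                                          # AIO bdev picks up the new file size
  scripts/rpc.py bdev_lvol_grow_lvstore -u a85d915a-04f4-4785-8b3f-20b71ea227d9    # lvstore claims the new clusters
  scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 | jq -r '.[0].total_data_clusters'   # 49 -> 99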
00:14:27.039 04:10:28 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83928 /var/tmp/bdevperf.sock 00:14:27.039 04:10:28 -- common/autotest_common.sh@829 -- # '[' -z 83928 ']' 00:14:27.039 04:10:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.039 04:10:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.039 04:10:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.039 04:10:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.039 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.298 [2024-11-26 04:10:28.842500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:27.298 [2024-11-26 04:10:28.843017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83928 ] 00:14:27.298 [2024-11-26 04:10:28.982724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.298 [2024-11-26 04:10:29.039007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.237 04:10:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.237 04:10:29 -- common/autotest_common.sh@862 -- # return 0 00:14:28.237 04:10:29 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:28.496 Nvme0n1 00:14:28.496 04:10:30 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:28.755 [ 00:14:28.755 { 00:14:28.755 "aliases": [ 00:14:28.755 "f1bf2e54-9eaf-4a30-ac45-0e3f163206e9" 00:14:28.755 ], 00:14:28.755 "assigned_rate_limits": { 00:14:28.755 "r_mbytes_per_sec": 0, 00:14:28.755 "rw_ios_per_sec": 0, 00:14:28.755 "rw_mbytes_per_sec": 0, 00:14:28.755 "w_mbytes_per_sec": 0 00:14:28.755 }, 00:14:28.755 "block_size": 4096, 00:14:28.755 "claimed": false, 00:14:28.755 "driver_specific": { 00:14:28.755 "mp_policy": "active_passive", 00:14:28.755 "nvme": [ 00:14:28.755 { 00:14:28.755 "ctrlr_data": { 00:14:28.755 "ana_reporting": false, 00:14:28.755 "cntlid": 1, 00:14:28.755 "firmware_revision": "24.01.1", 00:14:28.755 "model_number": "SPDK bdev Controller", 00:14:28.755 "multi_ctrlr": true, 00:14:28.755 "oacs": { 00:14:28.755 "firmware": 0, 00:14:28.755 "format": 0, 00:14:28.755 "ns_manage": 0, 00:14:28.755 "security": 0 00:14:28.755 }, 00:14:28.755 "serial_number": "SPDK0", 00:14:28.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:28.755 "vendor_id": "0x8086" 00:14:28.755 }, 00:14:28.755 "ns_data": { 00:14:28.755 "can_share": true, 00:14:28.755 "id": 1 00:14:28.755 }, 00:14:28.755 "trid": { 00:14:28.755 "adrfam": "IPv4", 00:14:28.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:28.755 "traddr": "10.0.0.2", 00:14:28.755 "trsvcid": "4420", 00:14:28.755 "trtype": "TCP" 00:14:28.755 }, 00:14:28.755 "vs": { 00:14:28.755 "nvme_version": "1.3" 00:14:28.755 } 00:14:28.755 } 00:14:28.755 ] 00:14:28.755 }, 00:14:28.755 "name": "Nvme0n1", 00:14:28.755 "num_blocks": 38912, 00:14:28.755 "product_name": "NVMe disk", 00:14:28.755 "supported_io_types": { 00:14:28.755 "abort": true, 00:14:28.755 "compare": true, 00:14:28.755 "compare_and_write": true, 00:14:28.755 "flush": true, 
00:14:28.755 "nvme_admin": true, 00:14:28.755 "nvme_io": true, 00:14:28.755 "read": true, 00:14:28.755 "reset": true, 00:14:28.755 "unmap": true, 00:14:28.755 "write": true, 00:14:28.755 "write_zeroes": true 00:14:28.755 }, 00:14:28.755 "uuid": "f1bf2e54-9eaf-4a30-ac45-0e3f163206e9", 00:14:28.755 "zoned": false 00:14:28.755 } 00:14:28.755 ] 00:14:28.755 04:10:30 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83979 00:14:28.755 04:10:30 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.755 04:10:30 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:28.755 Running I/O for 10 seconds... 00:14:30.132 Latency(us) 00:14:30.132 [2024-11-26T04:10:31.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.132 [2024-11-26T04:10:31.900Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.132 Nvme0n1 : 1.00 9670.00 37.77 0.00 0.00 0.00 0.00 0.00 00:14:30.132 [2024-11-26T04:10:31.900Z] =================================================================================================================== 00:14:30.132 [2024-11-26T04:10:31.900Z] Total : 9670.00 37.77 0.00 0.00 0.00 0.00 0.00 00:14:30.132 00:14:30.701 04:10:32 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:30.961 [2024-11-26T04:10:32.729Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.961 Nvme0n1 : 2.00 9612.50 37.55 0.00 0.00 0.00 0.00 0.00 00:14:30.961 [2024-11-26T04:10:32.729Z] =================================================================================================================== 00:14:30.961 [2024-11-26T04:10:32.729Z] Total : 9612.50 37.55 0.00 0.00 0.00 0.00 0.00 00:14:30.961 00:14:31.220 true 00:14:31.220 04:10:32 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:31.220 04:10:32 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:31.479 04:10:33 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:31.479 04:10:33 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:31.479 04:10:33 -- target/nvmf_lvs_grow.sh@65 -- # wait 83979 00:14:32.046 [2024-11-26T04:10:33.814Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.046 Nvme0n1 : 3.00 9487.00 37.06 0.00 0.00 0.00 0.00 0.00 00:14:32.046 [2024-11-26T04:10:33.814Z] =================================================================================================================== 00:14:32.046 [2024-11-26T04:10:33.814Z] Total : 9487.00 37.06 0.00 0.00 0.00 0.00 0.00 00:14:32.046 00:14:32.982 [2024-11-26T04:10:34.750Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.982 Nvme0n1 : 4.00 9406.50 36.74 0.00 0.00 0.00 0.00 0.00 00:14:32.982 [2024-11-26T04:10:34.750Z] =================================================================================================================== 00:14:32.982 [2024-11-26T04:10:34.750Z] Total : 9406.50 36.74 0.00 0.00 0.00 0.00 0.00 00:14:32.982 00:14:33.917 [2024-11-26T04:10:35.685Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.917 Nvme0n1 : 5.00 9399.80 36.72 0.00 0.00 0.00 0.00 0.00 00:14:33.917 [2024-11-26T04:10:35.685Z] 
=================================================================================================================== 00:14:33.917 [2024-11-26T04:10:35.685Z] Total : 9399.80 36.72 0.00 0.00 0.00 0.00 0.00 00:14:33.917 00:14:34.851 [2024-11-26T04:10:36.619Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.851 Nvme0n1 : 6.00 9363.33 36.58 0.00 0.00 0.00 0.00 0.00 00:14:34.851 [2024-11-26T04:10:36.619Z] =================================================================================================================== 00:14:34.851 [2024-11-26T04:10:36.619Z] Total : 9363.33 36.58 0.00 0.00 0.00 0.00 0.00 00:14:34.851 00:14:35.784 [2024-11-26T04:10:37.552Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.784 Nvme0n1 : 7.00 9147.57 35.73 0.00 0.00 0.00 0.00 0.00 00:14:35.784 [2024-11-26T04:10:37.552Z] =================================================================================================================== 00:14:35.784 [2024-11-26T04:10:37.553Z] Total : 9147.57 35.73 0.00 0.00 0.00 0.00 0.00 00:14:35.785 00:14:37.160 [2024-11-26T04:10:38.928Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.160 Nvme0n1 : 8.00 9114.88 35.60 0.00 0.00 0.00 0.00 0.00 00:14:37.160 [2024-11-26T04:10:38.928Z] =================================================================================================================== 00:14:37.160 [2024-11-26T04:10:38.928Z] Total : 9114.88 35.60 0.00 0.00 0.00 0.00 0.00 00:14:37.160 00:14:38.095 [2024-11-26T04:10:39.863Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.095 Nvme0n1 : 9.00 9105.89 35.57 0.00 0.00 0.00 0.00 0.00 00:14:38.095 [2024-11-26T04:10:39.863Z] =================================================================================================================== 00:14:38.095 [2024-11-26T04:10:39.863Z] Total : 9105.89 35.57 0.00 0.00 0.00 0.00 0.00 00:14:38.095 00:14:39.032 [2024-11-26T04:10:40.800Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.032 Nvme0n1 : 10.00 9091.30 35.51 0.00 0.00 0.00 0.00 0.00 00:14:39.032 [2024-11-26T04:10:40.800Z] =================================================================================================================== 00:14:39.032 [2024-11-26T04:10:40.800Z] Total : 9091.30 35.51 0.00 0.00 0.00 0.00 0.00 00:14:39.032 00:14:39.032 00:14:39.032 Latency(us) 00:14:39.032 [2024-11-26T04:10:40.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.032 [2024-11-26T04:10:40.800Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.032 Nvme0n1 : 10.01 9091.16 35.51 0.00 0.00 14071.16 6553.60 153473.40 00:14:39.032 [2024-11-26T04:10:40.800Z] =================================================================================================================== 00:14:39.032 [2024-11-26T04:10:40.800Z] Total : 9091.16 35.51 0.00 0.00 14071.16 6553.60 153473.40 00:14:39.032 0 00:14:39.032 04:10:40 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83928 00:14:39.032 04:10:40 -- common/autotest_common.sh@936 -- # '[' -z 83928 ']' 00:14:39.032 04:10:40 -- common/autotest_common.sh@940 -- # kill -0 83928 00:14:39.032 04:10:40 -- common/autotest_common.sh@941 -- # uname 00:14:39.032 04:10:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.032 04:10:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83928 00:14:39.032 04:10:40 -- common/autotest_common.sh@942 
-- # process_name=reactor_1 00:14:39.032 04:10:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:39.032 killing process with pid 83928 00:14:39.032 04:10:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83928' 00:14:39.032 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.032 00:14:39.032 Latency(us) 00:14:39.032 [2024-11-26T04:10:40.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.032 [2024-11-26T04:10:40.800Z] =================================================================================================================== 00:14:39.032 [2024-11-26T04:10:40.800Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.032 04:10:40 -- common/autotest_common.sh@955 -- # kill 83928 00:14:39.032 04:10:40 -- common/autotest_common.sh@960 -- # wait 83928 00:14:39.032 04:10:40 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:39.290 04:10:41 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:39.290 04:10:41 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:39.549 04:10:41 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:39.549 04:10:41 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:39.549 04:10:41 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:39.808 [2024-11-26 04:10:41.460220] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:39.808 04:10:41 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:39.808 04:10:41 -- common/autotest_common.sh@650 -- # local es=0 00:14:39.808 04:10:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:39.808 04:10:41 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.808 04:10:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.808 04:10:41 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.808 04:10:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.808 04:10:41 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.808 04:10:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.808 04:10:41 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.808 04:10:41 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:39.808 04:10:41 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:40.067 2024/11/26 04:10:41 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a85d915a-04f4-4785-8b3f-20b71ea227d9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:40.067 request: 00:14:40.067 { 00:14:40.067 "method": "bdev_lvol_get_lvstores", 00:14:40.067 "params": { 00:14:40.067 "uuid": "a85d915a-04f4-4785-8b3f-20b71ea227d9" 00:14:40.067 } 00:14:40.067 } 00:14:40.067 Got JSON-RPC 
error response 00:14:40.067 GoRPCClient: error on JSON-RPC call 00:14:40.067 04:10:41 -- common/autotest_common.sh@653 -- # es=1 00:14:40.067 04:10:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.067 04:10:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.067 04:10:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.067 04:10:41 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:40.327 aio_bdev 00:14:40.327 04:10:42 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f1bf2e54-9eaf-4a30-ac45-0e3f163206e9 00:14:40.327 04:10:42 -- common/autotest_common.sh@897 -- # local bdev_name=f1bf2e54-9eaf-4a30-ac45-0e3f163206e9 00:14:40.327 04:10:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:40.327 04:10:42 -- common/autotest_common.sh@899 -- # local i 00:14:40.327 04:10:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:40.327 04:10:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:40.327 04:10:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:40.586 04:10:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f1bf2e54-9eaf-4a30-ac45-0e3f163206e9 -t 2000 00:14:40.845 [ 00:14:40.845 { 00:14:40.845 "aliases": [ 00:14:40.845 "lvs/lvol" 00:14:40.845 ], 00:14:40.845 "assigned_rate_limits": { 00:14:40.845 "r_mbytes_per_sec": 0, 00:14:40.845 "rw_ios_per_sec": 0, 00:14:40.845 "rw_mbytes_per_sec": 0, 00:14:40.845 "w_mbytes_per_sec": 0 00:14:40.845 }, 00:14:40.845 "block_size": 4096, 00:14:40.845 "claimed": false, 00:14:40.845 "driver_specific": { 00:14:40.845 "lvol": { 00:14:40.845 "base_bdev": "aio_bdev", 00:14:40.845 "clone": false, 00:14:40.845 "esnap_clone": false, 00:14:40.845 "lvol_store_uuid": "a85d915a-04f4-4785-8b3f-20b71ea227d9", 00:14:40.845 "snapshot": false, 00:14:40.845 "thin_provision": false 00:14:40.845 } 00:14:40.845 }, 00:14:40.845 "name": "f1bf2e54-9eaf-4a30-ac45-0e3f163206e9", 00:14:40.845 "num_blocks": 38912, 00:14:40.845 "product_name": "Logical Volume", 00:14:40.845 "supported_io_types": { 00:14:40.845 "abort": false, 00:14:40.845 "compare": false, 00:14:40.845 "compare_and_write": false, 00:14:40.845 "flush": false, 00:14:40.845 "nvme_admin": false, 00:14:40.845 "nvme_io": false, 00:14:40.845 "read": true, 00:14:40.845 "reset": true, 00:14:40.845 "unmap": true, 00:14:40.845 "write": true, 00:14:40.845 "write_zeroes": true 00:14:40.845 }, 00:14:40.845 "uuid": "f1bf2e54-9eaf-4a30-ac45-0e3f163206e9", 00:14:40.845 "zoned": false 00:14:40.845 } 00:14:40.845 ] 00:14:40.845 04:10:42 -- common/autotest_common.sh@905 -- # return 0 00:14:40.845 04:10:42 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:40.845 04:10:42 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:41.104 04:10:42 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:41.104 04:10:42 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:41.104 04:10:42 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:41.362 04:10:42 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:41.362 04:10:42 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete f1bf2e54-9eaf-4a30-ac45-0e3f163206e9 00:14:41.620 04:10:43 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a85d915a-04f4-4785-8b3f-20b71ea227d9 00:14:41.879 04:10:43 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:42.138 04:10:43 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.397 ************************************ 00:14:42.397 END TEST lvs_grow_clean 00:14:42.397 ************************************ 00:14:42.397 00:14:42.397 real 0m17.681s 00:14:42.397 user 0m17.059s 00:14:42.397 sys 0m2.172s 00:14:42.397 04:10:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:42.397 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:42.397 04:10:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:42.397 04:10:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.397 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:14:42.397 ************************************ 00:14:42.397 START TEST lvs_grow_dirty 00:14:42.397 ************************************ 00:14:42.397 04:10:44 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.397 04:10:44 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:42.656 04:10:44 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:42.656 04:10:44 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:42.915 04:10:44 -- target/nvmf_lvs_grow.sh@28 -- # lvs=052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:42.915 04:10:44 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:42.915 04:10:44 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:43.173 04:10:44 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:43.173 04:10:44 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:43.173 04:10:44 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 lvol 150 00:14:43.430 04:10:45 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:14:43.430 04:10:45 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:43.430 04:10:45 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:43.688 [2024-11-26 04:10:45.316760] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:43.688 [2024-11-26 04:10:45.316837] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:43.688 true 00:14:43.689 04:10:45 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:43.689 04:10:45 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:43.947 04:10:45 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:43.947 04:10:45 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:44.206 04:10:45 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:14:44.464 04:10:46 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:44.723 04:10:46 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:44.723 04:10:46 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84370 00:14:44.723 04:10:46 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:44.982 04:10:46 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.982 04:10:46 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84370 /var/tmp/bdevperf.sock 00:14:44.982 04:10:46 -- common/autotest_common.sh@829 -- # '[' -z 84370 ']' 00:14:44.982 04:10:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.982 04:10:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.982 04:10:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.982 04:10:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.982 04:10:46 -- common/autotest_common.sh@10 -- # set +x 00:14:44.982 [2024-11-26 04:10:46.522062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:44.982 [2024-11-26 04:10:46.522626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84370 ] 00:14:44.982 [2024-11-26 04:10:46.655823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.982 [2024-11-26 04:10:46.712288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.919 04:10:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.919 04:10:47 -- common/autotest_common.sh@862 -- # return 0 00:14:45.919 04:10:47 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:46.178 Nvme0n1 00:14:46.178 04:10:47 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:46.178 [ 00:14:46.178 { 00:14:46.178 "aliases": [ 00:14:46.178 "c1e28e92-08c9-470b-920f-5a9aa5ce5267" 00:14:46.178 ], 00:14:46.178 "assigned_rate_limits": { 00:14:46.178 "r_mbytes_per_sec": 0, 00:14:46.178 "rw_ios_per_sec": 0, 00:14:46.178 "rw_mbytes_per_sec": 0, 00:14:46.178 "w_mbytes_per_sec": 0 00:14:46.178 }, 00:14:46.178 "block_size": 4096, 00:14:46.178 "claimed": false, 00:14:46.178 "driver_specific": { 00:14:46.178 "mp_policy": "active_passive", 00:14:46.178 "nvme": [ 00:14:46.178 { 00:14:46.178 "ctrlr_data": { 00:14:46.178 "ana_reporting": false, 00:14:46.178 "cntlid": 1, 00:14:46.178 "firmware_revision": "24.01.1", 00:14:46.178 "model_number": "SPDK bdev Controller", 00:14:46.178 "multi_ctrlr": true, 00:14:46.178 "oacs": { 00:14:46.178 "firmware": 0, 00:14:46.178 "format": 0, 00:14:46.178 "ns_manage": 0, 00:14:46.178 "security": 0 00:14:46.178 }, 00:14:46.178 "serial_number": "SPDK0", 00:14:46.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.178 "vendor_id": "0x8086" 00:14:46.178 }, 00:14:46.178 "ns_data": { 00:14:46.178 "can_share": true, 00:14:46.178 "id": 1 00:14:46.178 }, 00:14:46.178 "trid": { 00:14:46.178 "adrfam": "IPv4", 00:14:46.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.178 "traddr": "10.0.0.2", 00:14:46.178 "trsvcid": "4420", 00:14:46.178 "trtype": "TCP" 00:14:46.178 }, 00:14:46.178 "vs": { 00:14:46.178 "nvme_version": "1.3" 00:14:46.178 } 00:14:46.178 } 00:14:46.178 ] 00:14:46.178 }, 00:14:46.178 "name": "Nvme0n1", 00:14:46.178 "num_blocks": 38912, 00:14:46.178 "product_name": "NVMe disk", 00:14:46.178 "supported_io_types": { 00:14:46.178 "abort": true, 00:14:46.178 "compare": true, 00:14:46.178 "compare_and_write": true, 00:14:46.178 "flush": true, 00:14:46.178 "nvme_admin": true, 00:14:46.178 "nvme_io": true, 00:14:46.178 "read": true, 00:14:46.178 "reset": true, 00:14:46.178 "unmap": true, 00:14:46.178 "write": true, 00:14:46.178 "write_zeroes": true 00:14:46.178 }, 00:14:46.178 "uuid": "c1e28e92-08c9-470b-920f-5a9aa5ce5267", 00:14:46.178 "zoned": false 00:14:46.178 } 00:14:46.178 ] 00:14:46.437 04:10:47 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.437 04:10:47 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84413 00:14:46.437 04:10:47 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:46.437 Running I/O for 10 seconds... 
00:14:47.372 Latency(us) 00:14:47.372 [2024-11-26T04:10:49.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.372 [2024-11-26T04:10:49.140Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.372 Nvme0n1 : 1.00 9530.00 37.23 0.00 0.00 0.00 0.00 0.00 00:14:47.372 [2024-11-26T04:10:49.140Z] =================================================================================================================== 00:14:47.372 [2024-11-26T04:10:49.140Z] Total : 9530.00 37.23 0.00 0.00 0.00 0.00 0.00 00:14:47.372 00:14:48.316 04:10:49 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:48.316 [2024-11-26T04:10:50.084Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.316 Nvme0n1 : 2.00 9410.00 36.76 0.00 0.00 0.00 0.00 0.00 00:14:48.316 [2024-11-26T04:10:50.084Z] =================================================================================================================== 00:14:48.316 [2024-11-26T04:10:50.084Z] Total : 9410.00 36.76 0.00 0.00 0.00 0.00 0.00 00:14:48.316 00:14:48.575 true 00:14:48.575 04:10:50 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:48.575 04:10:50 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:49.148 04:10:50 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:49.148 04:10:50 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:49.148 04:10:50 -- target/nvmf_lvs_grow.sh@65 -- # wait 84413 00:14:49.423 [2024-11-26T04:10:51.191Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.423 Nvme0n1 : 3.00 9443.33 36.89 0.00 0.00 0.00 0.00 0.00 00:14:49.423 [2024-11-26T04:10:51.191Z] =================================================================================================================== 00:14:49.423 [2024-11-26T04:10:51.191Z] Total : 9443.33 36.89 0.00 0.00 0.00 0.00 0.00 00:14:49.423 00:14:50.374 [2024-11-26T04:10:52.142Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.374 Nvme0n1 : 4.00 9205.00 35.96 0.00 0.00 0.00 0.00 0.00 00:14:50.374 [2024-11-26T04:10:52.142Z] =================================================================================================================== 00:14:50.374 [2024-11-26T04:10:52.142Z] Total : 9205.00 35.96 0.00 0.00 0.00 0.00 0.00 00:14:50.374 00:14:51.310 [2024-11-26T04:10:53.078Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.310 Nvme0n1 : 5.00 9184.20 35.88 0.00 0.00 0.00 0.00 0.00 00:14:51.310 [2024-11-26T04:10:53.078Z] =================================================================================================================== 00:14:51.310 [2024-11-26T04:10:53.078Z] Total : 9184.20 35.88 0.00 0.00 0.00 0.00 0.00 00:14:51.310 00:14:52.689 [2024-11-26T04:10:54.457Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.689 Nvme0n1 : 6.00 9231.33 36.06 0.00 0.00 0.00 0.00 0.00 00:14:52.689 [2024-11-26T04:10:54.457Z] =================================================================================================================== 00:14:52.689 [2024-11-26T04:10:54.457Z] Total : 9231.33 36.06 0.00 0.00 0.00 0.00 0.00 00:14:52.689 00:14:53.625 [2024-11-26T04:10:55.393Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:53.625 Nvme0n1 : 7.00 9251.43 36.14 0.00 0.00 0.00 0.00 0.00 00:14:53.625 [2024-11-26T04:10:55.393Z] =================================================================================================================== 00:14:53.625 [2024-11-26T04:10:55.393Z] Total : 9251.43 36.14 0.00 0.00 0.00 0.00 0.00 00:14:53.625 00:14:54.561 [2024-11-26T04:10:56.329Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.561 Nvme0n1 : 8.00 9067.25 35.42 0.00 0.00 0.00 0.00 0.00 00:14:54.561 [2024-11-26T04:10:56.329Z] =================================================================================================================== 00:14:54.561 [2024-11-26T04:10:56.329Z] Total : 9067.25 35.42 0.00 0.00 0.00 0.00 0.00 00:14:54.561 00:14:55.499 [2024-11-26T04:10:57.267Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.499 Nvme0n1 : 9.00 9031.00 35.28 0.00 0.00 0.00 0.00 0.00 00:14:55.499 [2024-11-26T04:10:57.267Z] =================================================================================================================== 00:14:55.499 [2024-11-26T04:10:57.267Z] Total : 9031.00 35.28 0.00 0.00 0.00 0.00 0.00 00:14:55.499 00:14:56.437 [2024-11-26T04:10:58.205Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.437 Nvme0n1 : 10.00 8991.50 35.12 0.00 0.00 0.00 0.00 0.00 00:14:56.437 [2024-11-26T04:10:58.205Z] =================================================================================================================== 00:14:56.437 [2024-11-26T04:10:58.205Z] Total : 8991.50 35.12 0.00 0.00 0.00 0.00 0.00 00:14:56.437 00:14:56.437 00:14:56.437 Latency(us) 00:14:56.437 [2024-11-26T04:10:58.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.437 [2024-11-26T04:10:58.205Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.437 Nvme0n1 : 10.01 8996.74 35.14 0.00 0.00 14223.39 3425.75 111053.73 00:14:56.437 [2024-11-26T04:10:58.205Z] =================================================================================================================== 00:14:56.437 [2024-11-26T04:10:58.205Z] Total : 8996.74 35.14 0.00 0.00 14223.39 3425.75 111053.73 00:14:56.437 0 00:14:56.437 04:10:58 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84370 00:14:56.437 04:10:58 -- common/autotest_common.sh@936 -- # '[' -z 84370 ']' 00:14:56.437 04:10:58 -- common/autotest_common.sh@940 -- # kill -0 84370 00:14:56.437 04:10:58 -- common/autotest_common.sh@941 -- # uname 00:14:56.437 04:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.437 04:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84370 00:14:56.437 04:10:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:56.437 04:10:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:56.437 killing process with pid 84370 00:14:56.437 04:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84370' 00:14:56.437 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.437 00:14:56.437 Latency(us) 00:14:56.437 [2024-11-26T04:10:58.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.437 [2024-11-26T04:10:58.205Z] =================================================================================================================== 00:14:56.437 [2024-11-26T04:10:58.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.437 04:10:58 -- common/autotest_common.sh@955 
-- # kill 84370 00:14:56.437 04:10:58 -- common/autotest_common.sh@960 -- # wait 84370 00:14:56.696 04:10:58 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.955 04:10:58 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:56.955 04:10:58 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:57.214 04:10:58 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:57.214 04:10:58 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:57.214 04:10:58 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83770 00:14:57.214 04:10:58 -- target/nvmf_lvs_grow.sh@74 -- # wait 83770 00:14:57.214 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83770 Killed "${NVMF_APP[@]}" "$@" 00:14:57.214 04:10:58 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:57.214 04:10:58 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:57.214 04:10:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.214 04:10:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.214 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:14:57.214 04:10:58 -- nvmf/common.sh@469 -- # nvmfpid=84570 00:14:57.214 04:10:58 -- nvmf/common.sh@470 -- # waitforlisten 84570 00:14:57.214 04:10:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:57.214 04:10:58 -- common/autotest_common.sh@829 -- # '[' -z 84570 ']' 00:14:57.214 04:10:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.214 04:10:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.214 04:10:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.214 04:10:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.214 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:14:57.214 [2024-11-26 04:10:58.931258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:57.214 [2024-11-26 04:10:58.931346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.473 [2024-11-26 04:10:59.062835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.473 [2024-11-26 04:10:59.135152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.473 [2024-11-26 04:10:59.135292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.473 [2024-11-26 04:10:59.135306] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.473 [2024-11-26 04:10:59.135315] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.473 [2024-11-26 04:10:59.135347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.411 04:10:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.411 04:10:59 -- common/autotest_common.sh@862 -- # return 0 00:14:58.411 04:10:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.411 04:10:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.411 04:10:59 -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 04:10:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.411 04:10:59 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.669 [2024-11-26 04:11:00.259685] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:58.669 [2024-11-26 04:11:00.260127] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:58.669 [2024-11-26 04:11:00.260297] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:58.669 04:11:00 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:58.669 04:11:00 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:14:58.669 04:11:00 -- common/autotest_common.sh@897 -- # local bdev_name=c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:14:58.669 04:11:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.669 04:11:00 -- common/autotest_common.sh@899 -- # local i 00:14:58.669 04:11:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.669 04:11:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.669 04:11:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:58.928 04:11:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1e28e92-08c9-470b-920f-5a9aa5ce5267 -t 2000 00:14:59.187 [ 00:14:59.187 { 00:14:59.187 "aliases": [ 00:14:59.187 "lvs/lvol" 00:14:59.187 ], 00:14:59.187 "assigned_rate_limits": { 00:14:59.187 "r_mbytes_per_sec": 0, 00:14:59.187 "rw_ios_per_sec": 0, 00:14:59.187 "rw_mbytes_per_sec": 0, 00:14:59.187 "w_mbytes_per_sec": 0 00:14:59.187 }, 00:14:59.187 "block_size": 4096, 00:14:59.187 "claimed": false, 00:14:59.187 "driver_specific": { 00:14:59.187 "lvol": { 00:14:59.187 "base_bdev": "aio_bdev", 00:14:59.187 "clone": false, 00:14:59.187 "esnap_clone": false, 00:14:59.187 "lvol_store_uuid": "052f4041-d9f8-4d5a-bede-4e3d1b6fb882", 00:14:59.187 "snapshot": false, 00:14:59.187 "thin_provision": false 00:14:59.187 } 00:14:59.187 }, 00:14:59.187 "name": "c1e28e92-08c9-470b-920f-5a9aa5ce5267", 00:14:59.187 "num_blocks": 38912, 00:14:59.187 "product_name": "Logical Volume", 00:14:59.187 "supported_io_types": { 00:14:59.187 "abort": false, 00:14:59.187 "compare": false, 00:14:59.187 "compare_and_write": false, 00:14:59.187 "flush": false, 00:14:59.187 "nvme_admin": false, 00:14:59.187 "nvme_io": false, 00:14:59.187 "read": true, 00:14:59.187 "reset": true, 00:14:59.187 "unmap": true, 00:14:59.187 "write": true, 00:14:59.187 "write_zeroes": true 00:14:59.187 }, 00:14:59.187 "uuid": "c1e28e92-08c9-470b-920f-5a9aa5ce5267", 00:14:59.187 "zoned": false 00:14:59.187 } 00:14:59.187 ] 00:14:59.187 04:11:00 -- common/autotest_common.sh@905 -- # return 0 00:14:59.187 04:11:00 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:59.187 04:11:00 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:59.446 04:11:01 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:59.446 04:11:01 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:59.446 04:11:01 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:59.704 04:11:01 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:59.704 04:11:01 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:59.962 [2024-11-26 04:11:01.537144] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:59.962 04:11:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:59.962 04:11:01 -- common/autotest_common.sh@650 -- # local es=0 00:14:59.962 04:11:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:14:59.962 04:11:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.962 04:11:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.962 04:11:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.962 04:11:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.962 04:11:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.962 04:11:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.962 04:11:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.962 04:11:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:59.962 04:11:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:15:00.221 2024/11/26 04:11:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:052f4041-d9f8-4d5a-bede-4e3d1b6fb882], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:00.221 request: 00:15:00.221 { 00:15:00.221 "method": "bdev_lvol_get_lvstores", 00:15:00.221 "params": { 00:15:00.221 "uuid": "052f4041-d9f8-4d5a-bede-4e3d1b6fb882" 00:15:00.221 } 00:15:00.221 } 00:15:00.221 Got JSON-RPC error response 00:15:00.221 GoRPCClient: error on JSON-RPC call 00:15:00.221 04:11:01 -- common/autotest_common.sh@653 -- # es=1 00:15:00.221 04:11:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.221 04:11:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.221 04:11:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.221 04:11:01 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:00.479 aio_bdev 00:15:00.479 04:11:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:15:00.479 04:11:02 -- common/autotest_common.sh@897 -- # local bdev_name=c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:15:00.479 04:11:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.479 
04:11:02 -- common/autotest_common.sh@899 -- # local i 00:15:00.479 04:11:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.479 04:11:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.479 04:11:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:00.737 04:11:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1e28e92-08c9-470b-920f-5a9aa5ce5267 -t 2000 00:15:00.737 [ 00:15:00.737 { 00:15:00.737 "aliases": [ 00:15:00.737 "lvs/lvol" 00:15:00.737 ], 00:15:00.737 "assigned_rate_limits": { 00:15:00.737 "r_mbytes_per_sec": 0, 00:15:00.737 "rw_ios_per_sec": 0, 00:15:00.738 "rw_mbytes_per_sec": 0, 00:15:00.738 "w_mbytes_per_sec": 0 00:15:00.738 }, 00:15:00.738 "block_size": 4096, 00:15:00.738 "claimed": false, 00:15:00.738 "driver_specific": { 00:15:00.738 "lvol": { 00:15:00.738 "base_bdev": "aio_bdev", 00:15:00.738 "clone": false, 00:15:00.738 "esnap_clone": false, 00:15:00.738 "lvol_store_uuid": "052f4041-d9f8-4d5a-bede-4e3d1b6fb882", 00:15:00.738 "snapshot": false, 00:15:00.738 "thin_provision": false 00:15:00.738 } 00:15:00.738 }, 00:15:00.738 "name": "c1e28e92-08c9-470b-920f-5a9aa5ce5267", 00:15:00.738 "num_blocks": 38912, 00:15:00.738 "product_name": "Logical Volume", 00:15:00.738 "supported_io_types": { 00:15:00.738 "abort": false, 00:15:00.738 "compare": false, 00:15:00.738 "compare_and_write": false, 00:15:00.738 "flush": false, 00:15:00.738 "nvme_admin": false, 00:15:00.738 "nvme_io": false, 00:15:00.738 "read": true, 00:15:00.738 "reset": true, 00:15:00.738 "unmap": true, 00:15:00.738 "write": true, 00:15:00.738 "write_zeroes": true 00:15:00.738 }, 00:15:00.738 "uuid": "c1e28e92-08c9-470b-920f-5a9aa5ce5267", 00:15:00.738 "zoned": false 00:15:00.738 } 00:15:00.738 ] 00:15:00.738 04:11:02 -- common/autotest_common.sh@905 -- # return 0 00:15:00.738 04:11:02 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:15:00.738 04:11:02 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:00.996 04:11:02 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:00.996 04:11:02 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:00.996 04:11:02 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:15:01.255 04:11:03 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:01.255 04:11:03 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c1e28e92-08c9-470b-920f-5a9aa5ce5267 00:15:01.514 04:11:03 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 052f4041-d9f8-4d5a-bede-4e3d1b6fb882 00:15:02.082 04:11:03 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.082 04:11:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:02.341 00:15:02.341 real 0m19.970s 00:15:02.341 user 0m40.408s 00:15:02.341 sys 0m8.273s 00:15:02.341 04:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.341 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:15:02.341 ************************************ 00:15:02.341 END TEST lvs_grow_dirty 00:15:02.341 ************************************ 00:15:02.599 04:11:04 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:02.600 04:11:04 -- common/autotest_common.sh@806 -- # type=--id 00:15:02.600 04:11:04 -- common/autotest_common.sh@807 -- # id=0 00:15:02.600 04:11:04 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:02.600 04:11:04 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:02.600 04:11:04 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:02.600 04:11:04 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:02.600 04:11:04 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:02.600 04:11:04 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:02.600 nvmf_trace.0 00:15:02.600 04:11:04 -- common/autotest_common.sh@821 -- # return 0 00:15:02.600 04:11:04 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:02.600 04:11:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.600 04:11:04 -- nvmf/common.sh@116 -- # sync 00:15:02.859 04:11:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.859 04:11:04 -- nvmf/common.sh@119 -- # set +e 00:15:02.859 04:11:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.859 04:11:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.859 rmmod nvme_tcp 00:15:02.859 rmmod nvme_fabrics 00:15:03.118 rmmod nvme_keyring 00:15:03.118 04:11:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:03.118 04:11:04 -- nvmf/common.sh@123 -- # set -e 00:15:03.118 04:11:04 -- nvmf/common.sh@124 -- # return 0 00:15:03.118 04:11:04 -- nvmf/common.sh@477 -- # '[' -n 84570 ']' 00:15:03.118 04:11:04 -- nvmf/common.sh@478 -- # killprocess 84570 00:15:03.118 04:11:04 -- common/autotest_common.sh@936 -- # '[' -z 84570 ']' 00:15:03.118 04:11:04 -- common/autotest_common.sh@940 -- # kill -0 84570 00:15:03.118 04:11:04 -- common/autotest_common.sh@941 -- # uname 00:15:03.118 04:11:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.118 04:11:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84570 00:15:03.118 04:11:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:03.118 04:11:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:03.118 killing process with pid 84570 00:15:03.118 04:11:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84570' 00:15:03.118 04:11:04 -- common/autotest_common.sh@955 -- # kill 84570 00:15:03.118 04:11:04 -- common/autotest_common.sh@960 -- # wait 84570 00:15:03.377 04:11:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:03.377 04:11:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:03.377 04:11:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:03.377 04:11:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.377 04:11:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:03.377 04:11:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.377 04:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.377 04:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.377 04:11:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:03.377 00:15:03.377 real 0m40.604s 00:15:03.377 user 1m4.181s 00:15:03.377 sys 0m11.465s 00:15:03.377 04:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:03.377 ************************************ 00:15:03.377 END TEST nvmf_lvs_grow 00:15:03.377 
************************************ 00:15:03.377 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:15:03.378 04:11:05 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:03.378 04:11:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:03.378 04:11:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:03.378 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:03.378 ************************************ 00:15:03.378 START TEST nvmf_bdev_io_wait 00:15:03.378 ************************************ 00:15:03.378 04:11:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:03.378 * Looking for test storage... 00:15:03.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:03.378 04:11:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:03.378 04:11:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:03.378 04:11:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:03.637 04:11:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:03.637 04:11:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:03.637 04:11:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:03.637 04:11:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:03.637 04:11:05 -- scripts/common.sh@335 -- # IFS=.-: 00:15:03.637 04:11:05 -- scripts/common.sh@335 -- # read -ra ver1 00:15:03.637 04:11:05 -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.637 04:11:05 -- scripts/common.sh@336 -- # read -ra ver2 00:15:03.637 04:11:05 -- scripts/common.sh@337 -- # local 'op=<' 00:15:03.637 04:11:05 -- scripts/common.sh@339 -- # ver1_l=2 00:15:03.637 04:11:05 -- scripts/common.sh@340 -- # ver2_l=1 00:15:03.637 04:11:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:03.637 04:11:05 -- scripts/common.sh@343 -- # case "$op" in 00:15:03.637 04:11:05 -- scripts/common.sh@344 -- # : 1 00:15:03.637 04:11:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:03.637 04:11:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.637 04:11:05 -- scripts/common.sh@364 -- # decimal 1 00:15:03.637 04:11:05 -- scripts/common.sh@352 -- # local d=1 00:15:03.637 04:11:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.637 04:11:05 -- scripts/common.sh@354 -- # echo 1 00:15:03.637 04:11:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:03.637 04:11:05 -- scripts/common.sh@365 -- # decimal 2 00:15:03.637 04:11:05 -- scripts/common.sh@352 -- # local d=2 00:15:03.637 04:11:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.637 04:11:05 -- scripts/common.sh@354 -- # echo 2 00:15:03.637 04:11:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:03.637 04:11:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:03.637 04:11:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:03.637 04:11:05 -- scripts/common.sh@367 -- # return 0 00:15:03.637 04:11:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.637 04:11:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.637 --rc genhtml_branch_coverage=1 00:15:03.637 --rc genhtml_function_coverage=1 00:15:03.637 --rc genhtml_legend=1 00:15:03.637 --rc geninfo_all_blocks=1 00:15:03.637 --rc geninfo_unexecuted_blocks=1 00:15:03.637 00:15:03.637 ' 00:15:03.637 04:11:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.637 --rc genhtml_branch_coverage=1 00:15:03.637 --rc genhtml_function_coverage=1 00:15:03.637 --rc genhtml_legend=1 00:15:03.637 --rc geninfo_all_blocks=1 00:15:03.637 --rc geninfo_unexecuted_blocks=1 00:15:03.637 00:15:03.637 ' 00:15:03.637 04:11:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:03.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.638 --rc genhtml_branch_coverage=1 00:15:03.638 --rc genhtml_function_coverage=1 00:15:03.638 --rc genhtml_legend=1 00:15:03.638 --rc geninfo_all_blocks=1 00:15:03.638 --rc geninfo_unexecuted_blocks=1 00:15:03.638 00:15:03.638 ' 00:15:03.638 04:11:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:03.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.638 --rc genhtml_branch_coverage=1 00:15:03.638 --rc genhtml_function_coverage=1 00:15:03.638 --rc genhtml_legend=1 00:15:03.638 --rc geninfo_all_blocks=1 00:15:03.638 --rc geninfo_unexecuted_blocks=1 00:15:03.638 00:15:03.638 ' 00:15:03.638 04:11:05 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.638 04:11:05 -- nvmf/common.sh@7 -- # uname -s 00:15:03.638 04:11:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.638 04:11:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.638 04:11:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.638 04:11:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.638 04:11:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.638 04:11:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.638 04:11:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.638 04:11:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.638 04:11:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.638 04:11:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.638 04:11:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
00:15:03.638 04:11:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:03.638 04:11:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.638 04:11:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.638 04:11:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.638 04:11:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.638 04:11:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.638 04:11:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.638 04:11:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.638 04:11:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.638 04:11:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.638 04:11:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.638 04:11:05 -- paths/export.sh@5 -- # export PATH 00:15:03.638 04:11:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.638 04:11:05 -- nvmf/common.sh@46 -- # : 0 00:15:03.638 04:11:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:03.638 04:11:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:03.638 04:11:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:03.638 04:11:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.638 04:11:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.638 04:11:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:03.638 04:11:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:03.638 04:11:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:03.638 04:11:05 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.638 04:11:05 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.638 04:11:05 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:03.638 04:11:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:03.638 04:11:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.638 04:11:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:03.638 04:11:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:03.638 04:11:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:03.638 04:11:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.638 04:11:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.638 04:11:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.638 04:11:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:03.638 04:11:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:03.638 04:11:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:03.638 04:11:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:03.638 04:11:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:03.638 04:11:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:03.638 04:11:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.638 04:11:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.638 04:11:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:03.638 04:11:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:03.638 04:11:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.638 04:11:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.638 04:11:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.638 04:11:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.638 04:11:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.638 04:11:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.638 04:11:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.638 04:11:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.638 04:11:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:03.638 04:11:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:03.638 Cannot find device "nvmf_tgt_br" 00:15:03.638 04:11:05 -- nvmf/common.sh@154 -- # true 00:15:03.638 04:11:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.638 Cannot find device "nvmf_tgt_br2" 00:15:03.638 04:11:05 -- nvmf/common.sh@155 -- # true 00:15:03.638 04:11:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:03.638 04:11:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:03.638 Cannot find device "nvmf_tgt_br" 00:15:03.638 04:11:05 -- nvmf/common.sh@157 -- # true 00:15:03.638 04:11:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:03.638 Cannot find device "nvmf_tgt_br2" 00:15:03.638 04:11:05 -- nvmf/common.sh@158 -- # true 00:15:03.638 04:11:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:03.638 04:11:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:03.638 04:11:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.638 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.638 04:11:05 -- nvmf/common.sh@161 -- # true 00:15:03.638 04:11:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.638 04:11:05 -- nvmf/common.sh@162 -- # true 00:15:03.638 04:11:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.898 04:11:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.898 04:11:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.898 04:11:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.898 04:11:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.898 04:11:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.898 04:11:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.898 04:11:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:03.898 04:11:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:03.898 04:11:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:03.898 04:11:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:03.898 04:11:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:03.898 04:11:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:03.898 04:11:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.898 04:11:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.898 04:11:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.898 04:11:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:03.898 04:11:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:03.898 04:11:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.898 04:11:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.898 04:11:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.898 04:11:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.898 04:11:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.898 04:11:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:03.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:03.898 00:15:03.898 --- 10.0.0.2 ping statistics --- 00:15:03.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.898 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:03.898 04:11:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:03.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:03.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:03.898 00:15:03.898 --- 10.0.0.3 ping statistics --- 00:15:03.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.898 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:03.898 04:11:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:03.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:03.898 00:15:03.898 --- 10.0.0.1 ping statistics --- 00:15:03.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.898 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:03.898 04:11:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.898 04:11:05 -- nvmf/common.sh@421 -- # return 0 00:15:03.898 04:11:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:03.898 04:11:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.898 04:11:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:03.898 04:11:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:03.898 04:11:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.898 04:11:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:03.898 04:11:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:03.898 04:11:05 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:03.898 04:11:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:03.898 04:11:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.898 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:03.898 04:11:05 -- nvmf/common.sh@469 -- # nvmfpid=85000 00:15:03.898 04:11:05 -- nvmf/common.sh@470 -- # waitforlisten 85000 00:15:03.898 04:11:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:03.898 04:11:05 -- common/autotest_common.sh@829 -- # '[' -z 85000 ']' 00:15:03.898 04:11:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.898 04:11:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.898 04:11:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.898 04:11:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.898 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:03.898 [2024-11-26 04:11:05.647638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:03.898 [2024-11-26 04:11:05.647748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.158 [2024-11-26 04:11:05.788197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.158 [2024-11-26 04:11:05.865283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:04.158 [2024-11-26 04:11:05.865455] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.158 [2024-11-26 04:11:05.865467] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.158 [2024-11-26 04:11:05.865475] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
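The nvmf_veth_init block traced above boils down to the following topology; this is a condensed sketch using the same interface names and addresses as the log (cleanup of stale devices and error handling omitted), not the full helper:

# target-side interfaces live in a private network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator keeps 10.0.0.1; the namespaced target answers on 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
# a bridge ties the host-side veth ends together so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity pings, exactly as in the trace
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that in place the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), as the startup notices above show, while bdevperf and the test shell stay on the host side of the bridge.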
00:15:04.158 [2024-11-26 04:11:05.865615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.158 [2024-11-26 04:11:05.865853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.158 [2024-11-26 04:11:05.866325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.158 [2024-11-26 04:11:05.866361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.158 04:11:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.158 04:11:05 -- common/autotest_common.sh@862 -- # return 0 00:15:04.158 04:11:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:04.158 04:11:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.158 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 04:11:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.418 04:11:05 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:04.418 04:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 04:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:05 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:04.418 04:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 04:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.418 04:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 [2024-11-26 04:11:06.045606] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.418 04:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.418 04:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 Malloc0 00:15:04.418 04:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:04.418 04:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 04:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.418 04:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 04:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.418 04:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.418 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:04.418 [2024-11-26 04:11:06.106289] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.418 04:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=85035 00:15:04.418 04:11:06 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@30 -- # READ_PID=85037 00:15:04.418 04:11:06 -- nvmf/common.sh@520 -- # config=() 00:15:04.418 04:11:06 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.418 04:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.418 04:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.418 { 00:15:04.418 "params": { 00:15:04.418 "name": "Nvme$subsystem", 00:15:04.418 "trtype": "$TEST_TRANSPORT", 00:15:04.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.418 "adrfam": "ipv4", 00:15:04.418 "trsvcid": "$NVMF_PORT", 00:15:04.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.418 "hdgst": ${hdgst:-false}, 00:15:04.418 "ddgst": ${ddgst:-false} 00:15:04.418 }, 00:15:04.418 "method": "bdev_nvme_attach_controller" 00:15:04.418 } 00:15:04.418 EOF 00:15:04.418 )") 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=85039 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:04.418 04:11:06 -- nvmf/common.sh@520 -- # config=() 00:15:04.418 04:11:06 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.418 04:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.418 04:11:06 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=85042 00:15:04.418 04:11:06 -- nvmf/common.sh@542 -- # cat 00:15:04.418 04:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.418 { 00:15:04.418 "params": { 00:15:04.418 "name": "Nvme$subsystem", 00:15:04.418 "trtype": "$TEST_TRANSPORT", 00:15:04.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.418 "adrfam": "ipv4", 00:15:04.418 "trsvcid": "$NVMF_PORT", 00:15:04.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.418 "hdgst": ${hdgst:-false}, 00:15:04.418 "ddgst": ${ddgst:-false} 00:15:04.418 }, 00:15:04.418 "method": "bdev_nvme_attach_controller" 00:15:04.418 } 00:15:04.418 EOF 00:15:04.419 )") 00:15:04.419 04:11:06 -- target/bdev_io_wait.sh@35 -- # sync 00:15:04.419 04:11:06 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:04.419 04:11:06 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:04.419 04:11:06 -- nvmf/common.sh@520 -- # config=() 00:15:04.419 04:11:06 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.419 04:11:06 -- nvmf/common.sh@542 -- # cat 00:15:04.419 04:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.419 04:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.419 { 00:15:04.419 "params": { 00:15:04.419 "name": "Nvme$subsystem", 00:15:04.419 "trtype": "$TEST_TRANSPORT", 00:15:04.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.419 "adrfam": "ipv4", 00:15:04.419 "trsvcid": "$NVMF_PORT", 00:15:04.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:04.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.419 "hdgst": ${hdgst:-false}, 00:15:04.419 "ddgst": ${ddgst:-false} 00:15:04.419 }, 00:15:04.419 "method": "bdev_nvme_attach_controller" 00:15:04.419 } 00:15:04.419 EOF 00:15:04.419 )") 00:15:04.419 04:11:06 -- nvmf/common.sh@542 -- # cat 00:15:04.419 04:11:06 -- nvmf/common.sh@544 -- # jq . 00:15:04.419 04:11:06 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:04.419 04:11:06 -- nvmf/common.sh@520 -- # config=() 00:15:04.419 04:11:06 -- nvmf/common.sh@520 -- # local subsystem config 00:15:04.419 04:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:04.419 04:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:04.419 { 00:15:04.419 "params": { 00:15:04.419 "name": "Nvme$subsystem", 00:15:04.419 "trtype": "$TEST_TRANSPORT", 00:15:04.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.419 "adrfam": "ipv4", 00:15:04.419 "trsvcid": "$NVMF_PORT", 00:15:04.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.419 "hdgst": ${hdgst:-false}, 00:15:04.419 "ddgst": ${ddgst:-false} 00:15:04.419 }, 00:15:04.419 "method": "bdev_nvme_attach_controller" 00:15:04.419 } 00:15:04.419 EOF 00:15:04.419 )") 00:15:04.419 04:11:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.419 04:11:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.419 "params": { 00:15:04.419 "name": "Nvme1", 00:15:04.419 "trtype": "tcp", 00:15:04.419 "traddr": "10.0.0.2", 00:15:04.419 "adrfam": "ipv4", 00:15:04.419 "trsvcid": "4420", 00:15:04.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.419 "hdgst": false, 00:15:04.419 "ddgst": false 00:15:04.419 }, 00:15:04.419 "method": "bdev_nvme_attach_controller" 00:15:04.419 }' 00:15:04.419 04:11:06 -- nvmf/common.sh@544 -- # jq . 00:15:04.419 04:11:06 -- nvmf/common.sh@542 -- # cat 00:15:04.419 04:11:06 -- nvmf/common.sh@544 -- # jq . 00:15:04.419 04:11:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.419 04:11:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.419 "params": { 00:15:04.419 "name": "Nvme1", 00:15:04.419 "trtype": "tcp", 00:15:04.419 "traddr": "10.0.0.2", 00:15:04.419 "adrfam": "ipv4", 00:15:04.419 "trsvcid": "4420", 00:15:04.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.419 "hdgst": false, 00:15:04.419 "ddgst": false 00:15:04.419 }, 00:15:04.419 "method": "bdev_nvme_attach_controller" 00:15:04.419 }' 00:15:04.419 04:11:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.419 04:11:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.419 "params": { 00:15:04.419 "name": "Nvme1", 00:15:04.419 "trtype": "tcp", 00:15:04.419 "traddr": "10.0.0.2", 00:15:04.419 "adrfam": "ipv4", 00:15:04.419 "trsvcid": "4420", 00:15:04.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.419 "hdgst": false, 00:15:04.419 "ddgst": false 00:15:04.419 }, 00:15:04.419 "method": "bdev_nvme_attach_controller" 00:15:04.419 }' 00:15:04.419 04:11:06 -- nvmf/common.sh@544 -- # jq . 
00:15:04.419 04:11:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:04.419 04:11:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:04.419 "params": { 00:15:04.419 "name": "Nvme1", 00:15:04.419 "trtype": "tcp", 00:15:04.419 "traddr": "10.0.0.2", 00:15:04.419 "adrfam": "ipv4", 00:15:04.419 "trsvcid": "4420", 00:15:04.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.419 "hdgst": false, 00:15:04.419 "ddgst": false 00:15:04.419 }, 00:15:04.419 "method": "bdev_nvme_attach_controller" 00:15:04.419 }' 00:15:04.419 [2024-11-26 04:11:06.170537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:04.419 [2024-11-26 04:11:06.170622] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:04.678 04:11:06 -- target/bdev_io_wait.sh@37 -- # wait 85035 00:15:04.678 [2024-11-26 04:11:06.190496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:04.678 [2024-11-26 04:11:06.190574] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:04.678 [2024-11-26 04:11:06.190921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:04.678 [2024-11-26 04:11:06.191001] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:04.678 [2024-11-26 04:11:06.204363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:04.678 [2024-11-26 04:11:06.204454] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:04.678 [2024-11-26 04:11:06.382060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.937 [2024-11-26 04:11:06.458141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.937 [2024-11-26 04:11:06.458979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:04.937 [2024-11-26 04:11:06.535545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.937 [2024-11-26 04:11:06.550081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:04.937 [2024-11-26 04:11:06.612882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.937 Running I/O for 1 seconds... 00:15:04.937 [2024-11-26 04:11:06.632237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:04.937 [2024-11-26 04:11:06.687829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:05.196 Running I/O for 1 seconds... 00:15:05.196 Running I/O for 1 seconds... 00:15:05.196 Running I/O for 1 seconds... 
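Each of the four bdevperf instances above was handed its target description as JSON on /dev/fd/63. A standalone equivalent is sketched below; the params block is copied from the printf output in the trace, while the surrounding "subsystems"/"bdev" wrapper produced by gen_nvmf_target_json is assumed here rather than shown verbatim in the log:

cat > /tmp/nvme1.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
# e.g. the write-workload instance, with the same flags as in the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap instances launched above differ only in core mask (-m 0x20/0x40/0x80), shared-memory id (-i 2/3/4) and workload (-w).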
00:15:06.133 00:15:06.133 Latency(us) 00:15:06.133 [2024-11-26T04:11:07.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.133 [2024-11-26T04:11:07.901Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:06.133 Nvme1n1 : 1.00 222398.38 868.74 0.00 0.00 573.59 296.03 975.59 00:15:06.133 [2024-11-26T04:11:07.901Z] =================================================================================================================== 00:15:06.133 [2024-11-26T04:11:07.901Z] Total : 222398.38 868.74 0.00 0.00 573.59 296.03 975.59 00:15:06.133 00:15:06.133 Latency(us) 00:15:06.133 [2024-11-26T04:11:07.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.133 [2024-11-26T04:11:07.901Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:06.133 Nvme1n1 : 1.03 5334.05 20.84 0.00 0.00 23551.29 3693.85 42181.35 00:15:06.133 [2024-11-26T04:11:07.901Z] =================================================================================================================== 00:15:06.133 [2024-11-26T04:11:07.901Z] Total : 5334.05 20.84 0.00 0.00 23551.29 3693.85 42181.35 00:15:06.133 00:15:06.133 Latency(us) 00:15:06.133 [2024-11-26T04:11:07.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.133 [2024-11-26T04:11:07.902Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:06.134 Nvme1n1 : 1.01 4827.59 18.86 0.00 0.00 26406.25 7566.43 45994.36 00:15:06.134 [2024-11-26T04:11:07.902Z] =================================================================================================================== 00:15:06.134 [2024-11-26T04:11:07.902Z] Total : 4827.59 18.86 0.00 0.00 26406.25 7566.43 45994.36 00:15:06.134 00:15:06.134 Latency(us) 00:15:06.134 [2024-11-26T04:11:07.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.134 [2024-11-26T04:11:07.902Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:06.134 Nvme1n1 : 1.01 6680.34 26.10 0.00 0.00 19084.72 7179.17 32172.22 00:15:06.134 [2024-11-26T04:11:07.902Z] =================================================================================================================== 00:15:06.134 [2024-11-26T04:11:07.902Z] Total : 6680.34 26.10 0.00 0.00 19084.72 7179.17 32172.22 00:15:06.702 04:11:08 -- target/bdev_io_wait.sh@38 -- # wait 85037 00:15:06.702 04:11:08 -- target/bdev_io_wait.sh@39 -- # wait 85039 00:15:06.702 04:11:08 -- target/bdev_io_wait.sh@40 -- # wait 85042 00:15:06.702 04:11:08 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.702 04:11:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.702 04:11:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 04:11:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.702 04:11:08 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:06.702 04:11:08 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:06.702 04:11:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:06.702 04:11:08 -- nvmf/common.sh@116 -- # sync 00:15:06.702 04:11:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:06.702 04:11:08 -- nvmf/common.sh@119 -- # set +e 00:15:06.702 04:11:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:06.702 04:11:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:06.702 rmmod nvme_tcp 00:15:06.702 rmmod nvme_fabrics 00:15:06.702 rmmod nvme_keyring 00:15:06.702 04:11:08 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:06.702 04:11:08 -- nvmf/common.sh@123 -- # set -e 00:15:06.702 04:11:08 -- nvmf/common.sh@124 -- # return 0 00:15:06.702 04:11:08 -- nvmf/common.sh@477 -- # '[' -n 85000 ']' 00:15:06.702 04:11:08 -- nvmf/common.sh@478 -- # killprocess 85000 00:15:06.702 04:11:08 -- common/autotest_common.sh@936 -- # '[' -z 85000 ']' 00:15:06.702 04:11:08 -- common/autotest_common.sh@940 -- # kill -0 85000 00:15:06.702 04:11:08 -- common/autotest_common.sh@941 -- # uname 00:15:06.702 04:11:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.702 04:11:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85000 00:15:06.702 04:11:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:06.702 04:11:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:06.702 04:11:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85000' 00:15:06.702 killing process with pid 85000 00:15:06.702 04:11:08 -- common/autotest_common.sh@955 -- # kill 85000 00:15:06.702 04:11:08 -- common/autotest_common.sh@960 -- # wait 85000 00:15:06.962 04:11:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.962 04:11:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:06.962 04:11:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:06.962 04:11:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.962 04:11:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:06.962 04:11:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.962 04:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.962 04:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.962 04:11:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:06.962 00:15:06.962 real 0m3.624s 00:15:06.962 user 0m16.388s 00:15:06.962 sys 0m1.849s 00:15:06.962 04:11:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:06.962 04:11:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.962 ************************************ 00:15:06.962 END TEST nvmf_bdev_io_wait 00:15:06.962 ************************************ 00:15:06.962 04:11:08 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:06.962 04:11:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:06.962 04:11:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.962 04:11:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.962 ************************************ 00:15:06.962 START TEST nvmf_queue_depth 00:15:06.962 ************************************ 00:15:06.962 04:11:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.222 * Looking for test storage... 
00:15:07.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:07.222 04:11:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:07.222 04:11:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:07.222 04:11:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:07.222 04:11:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:07.222 04:11:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:07.222 04:11:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:07.222 04:11:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:07.222 04:11:08 -- scripts/common.sh@335 -- # IFS=.-: 00:15:07.222 04:11:08 -- scripts/common.sh@335 -- # read -ra ver1 00:15:07.222 04:11:08 -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.222 04:11:08 -- scripts/common.sh@336 -- # read -ra ver2 00:15:07.222 04:11:08 -- scripts/common.sh@337 -- # local 'op=<' 00:15:07.222 04:11:08 -- scripts/common.sh@339 -- # ver1_l=2 00:15:07.222 04:11:08 -- scripts/common.sh@340 -- # ver2_l=1 00:15:07.222 04:11:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:07.222 04:11:08 -- scripts/common.sh@343 -- # case "$op" in 00:15:07.222 04:11:08 -- scripts/common.sh@344 -- # : 1 00:15:07.222 04:11:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:07.222 04:11:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:07.222 04:11:08 -- scripts/common.sh@364 -- # decimal 1 00:15:07.222 04:11:08 -- scripts/common.sh@352 -- # local d=1 00:15:07.222 04:11:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.222 04:11:08 -- scripts/common.sh@354 -- # echo 1 00:15:07.222 04:11:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:07.222 04:11:08 -- scripts/common.sh@365 -- # decimal 2 00:15:07.222 04:11:08 -- scripts/common.sh@352 -- # local d=2 00:15:07.222 04:11:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.222 04:11:08 -- scripts/common.sh@354 -- # echo 2 00:15:07.222 04:11:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:07.222 04:11:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:07.222 04:11:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:07.222 04:11:08 -- scripts/common.sh@367 -- # return 0 00:15:07.222 04:11:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.222 04:11:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:07.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.222 --rc genhtml_branch_coverage=1 00:15:07.222 --rc genhtml_function_coverage=1 00:15:07.222 --rc genhtml_legend=1 00:15:07.222 --rc geninfo_all_blocks=1 00:15:07.222 --rc geninfo_unexecuted_blocks=1 00:15:07.222 00:15:07.222 ' 00:15:07.222 04:11:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:07.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.222 --rc genhtml_branch_coverage=1 00:15:07.222 --rc genhtml_function_coverage=1 00:15:07.222 --rc genhtml_legend=1 00:15:07.222 --rc geninfo_all_blocks=1 00:15:07.222 --rc geninfo_unexecuted_blocks=1 00:15:07.222 00:15:07.222 ' 00:15:07.222 04:11:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:07.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.222 --rc genhtml_branch_coverage=1 00:15:07.222 --rc genhtml_function_coverage=1 00:15:07.222 --rc genhtml_legend=1 00:15:07.222 --rc geninfo_all_blocks=1 00:15:07.222 --rc geninfo_unexecuted_blocks=1 00:15:07.222 00:15:07.222 ' 00:15:07.222 
04:11:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:07.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.222 --rc genhtml_branch_coverage=1 00:15:07.222 --rc genhtml_function_coverage=1 00:15:07.222 --rc genhtml_legend=1 00:15:07.222 --rc geninfo_all_blocks=1 00:15:07.222 --rc geninfo_unexecuted_blocks=1 00:15:07.222 00:15:07.222 ' 00:15:07.222 04:11:08 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.222 04:11:08 -- nvmf/common.sh@7 -- # uname -s 00:15:07.222 04:11:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.222 04:11:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.222 04:11:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.222 04:11:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.222 04:11:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.222 04:11:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.222 04:11:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.222 04:11:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.222 04:11:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.222 04:11:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.222 04:11:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:07.222 04:11:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:07.222 04:11:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.222 04:11:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.222 04:11:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.222 04:11:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.222 04:11:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.222 04:11:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.222 04:11:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.222 04:11:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 04:11:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 04:11:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 04:11:08 -- paths/export.sh@5 -- # export PATH 00:15:07.222 04:11:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 04:11:08 -- nvmf/common.sh@46 -- # : 0 00:15:07.222 04:11:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.222 04:11:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.222 04:11:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.222 04:11:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.222 04:11:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.222 04:11:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:07.222 04:11:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.222 04:11:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.222 04:11:08 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:07.222 04:11:08 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:07.222 04:11:08 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.222 04:11:08 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:07.222 04:11:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:07.222 04:11:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.222 04:11:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.222 04:11:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.222 04:11:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.222 04:11:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.222 04:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.222 04:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.222 04:11:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:07.222 04:11:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:07.222 04:11:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:07.222 04:11:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:07.222 04:11:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:07.222 04:11:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:07.222 04:11:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.222 04:11:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.222 04:11:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:07.222 04:11:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:07.222 04:11:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.222 04:11:08 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.222 04:11:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.222 04:11:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.222 04:11:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.222 04:11:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.222 04:11:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.222 04:11:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.222 04:11:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:07.222 04:11:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:07.223 Cannot find device "nvmf_tgt_br" 00:15:07.223 04:11:08 -- nvmf/common.sh@154 -- # true 00:15:07.223 04:11:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.223 Cannot find device "nvmf_tgt_br2" 00:15:07.223 04:11:08 -- nvmf/common.sh@155 -- # true 00:15:07.223 04:11:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:07.223 04:11:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:07.482 Cannot find device "nvmf_tgt_br" 00:15:07.482 04:11:08 -- nvmf/common.sh@157 -- # true 00:15:07.482 04:11:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:07.482 Cannot find device "nvmf_tgt_br2" 00:15:07.482 04:11:08 -- nvmf/common.sh@158 -- # true 00:15:07.482 04:11:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:07.482 04:11:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:07.482 04:11:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.482 04:11:09 -- nvmf/common.sh@161 -- # true 00:15:07.482 04:11:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.482 04:11:09 -- nvmf/common.sh@162 -- # true 00:15:07.482 04:11:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.482 04:11:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.482 04:11:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.482 04:11:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.482 04:11:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.482 04:11:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.482 04:11:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.482 04:11:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.482 04:11:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:07.482 04:11:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:07.482 04:11:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:07.482 04:11:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:07.482 04:11:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:07.482 04:11:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.482 04:11:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:07.482 04:11:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.482 04:11:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:07.482 04:11:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:07.482 04:11:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.741 04:11:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.741 04:11:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.741 04:11:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.741 04:11:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.741 04:11:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:07.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:07.741 00:15:07.741 --- 10.0.0.2 ping statistics --- 00:15:07.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.741 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:07.741 04:11:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:07.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:07.741 00:15:07.741 --- 10.0.0.3 ping statistics --- 00:15:07.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.741 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:07.741 04:11:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:07.741 00:15:07.741 --- 10.0.0.1 ping statistics --- 00:15:07.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.741 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:07.741 04:11:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.741 04:11:09 -- nvmf/common.sh@421 -- # return 0 00:15:07.741 04:11:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.741 04:11:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.741 04:11:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:07.741 04:11:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:07.741 04:11:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.741 04:11:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:07.741 04:11:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.741 04:11:09 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:07.741 04:11:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.741 04:11:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.741 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:15:07.741 04:11:09 -- nvmf/common.sh@469 -- # nvmfpid=85260 00:15:07.741 04:11:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:07.741 04:11:09 -- nvmf/common.sh@470 -- # waitforlisten 85260 00:15:07.741 04:11:09 -- common/autotest_common.sh@829 -- # '[' -z 85260 ']' 00:15:07.741 04:11:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.741 04:11:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.741 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:07.741 04:11:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.741 04:11:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.741 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:15:07.741 [2024-11-26 04:11:09.362721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:07.741 [2024-11-26 04:11:09.362814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.741 [2024-11-26 04:11:09.487804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.000 [2024-11-26 04:11:09.546467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.000 [2024-11-26 04:11:09.546620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.000 [2024-11-26 04:11:09.546633] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.000 [2024-11-26 04:11:09.546641] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.000 [2024-11-26 04:11:09.546666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.936 04:11:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.936 04:11:10 -- common/autotest_common.sh@862 -- # return 0 00:15:08.936 04:11:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.936 04:11:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.936 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 04:11:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.936 04:11:10 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.936 04:11:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.936 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 [2024-11-26 04:11:10.454788] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.936 04:11:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.936 04:11:10 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.936 04:11:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.936 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 Malloc0 00:15:08.936 04:11:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.936 04:11:10 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.936 04:11:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.936 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 04:11:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.936 04:11:10 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.936 04:11:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.936 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 04:11:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.936 04:11:10 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
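The rpc_cmd calls above provision the target for the queue-depth run; issued explicitly with rpc.py against the default /var/tmp/spdk.sock socket (which is what rpc_cmd forwards to under the hood), the same sequence looks roughly like this:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                          # same transport flags as passed by the test
$RPC bdev_malloc_create 64 512 -b Malloc0                             # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420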
00:15:08.936 04:11:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.937 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.937 [2024-11-26 04:11:10.514488] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.937 04:11:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.937 04:11:10 -- target/queue_depth.sh@30 -- # bdevperf_pid=85310 00:15:08.937 04:11:10 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:08.937 04:11:10 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.937 04:11:10 -- target/queue_depth.sh@33 -- # waitforlisten 85310 /var/tmp/bdevperf.sock 00:15:08.937 04:11:10 -- common/autotest_common.sh@829 -- # '[' -z 85310 ']' 00:15:08.937 04:11:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.937 04:11:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.937 04:11:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.937 04:11:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.937 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:08.937 [2024-11-26 04:11:10.574286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:08.937 [2024-11-26 04:11:10.574395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85310 ] 00:15:09.196 [2024-11-26 04:11:10.715694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.196 [2024-11-26 04:11:10.809792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.763 04:11:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.763 04:11:11 -- common/autotest_common.sh@862 -- # return 0 00:15:09.763 04:11:11 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:09.763 04:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.763 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:15:10.022 NVMe0n1 00:15:10.022 04:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.022 04:11:11 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:10.022 Running I/O for 10 seconds... 
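Unlike the one-second runs in the previous test, the queue-depth run drives bdevperf through its RPC socket. A simplified sketch of the flow traced above, keeping the same binary paths and arguments but replacing the test's waitforlisten handshake with a plain sleep:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock
"$BDEVPERF" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &   # -z: start idle, wait for RPC configuration
sleep 2                                                        # stand-in for waiting on $SOCK to appear
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

At a queue depth of 1024 the verify workload below sustains roughly 17k IOPS of 4096-byte I/O, i.e. about 66.5 MiB/s, which is the MiB/s figure reported in the results table.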
00:15:19.999 00:15:19.999 Latency(us) 00:15:19.999 [2024-11-26T04:11:21.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.999 [2024-11-26T04:11:21.767Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:19.999 Verification LBA range: start 0x0 length 0x4000 00:15:19.999 NVMe0n1 : 10.05 17033.60 66.54 0.00 0.00 59928.97 12332.68 48615.80 00:15:19.999 [2024-11-26T04:11:21.767Z] =================================================================================================================== 00:15:19.999 [2024-11-26T04:11:21.767Z] Total : 17033.60 66.54 0.00 0.00 59928.97 12332.68 48615.80 00:15:19.999 0 00:15:19.999 04:11:21 -- target/queue_depth.sh@39 -- # killprocess 85310 00:15:19.999 04:11:21 -- common/autotest_common.sh@936 -- # '[' -z 85310 ']' 00:15:19.999 04:11:21 -- common/autotest_common.sh@940 -- # kill -0 85310 00:15:19.999 04:11:21 -- common/autotest_common.sh@941 -- # uname 00:15:20.258 04:11:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.258 04:11:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85310 00:15:20.258 killing process with pid 85310 00:15:20.258 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.258 00:15:20.258 Latency(us) 00:15:20.258 [2024-11-26T04:11:22.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.258 [2024-11-26T04:11:22.026Z] =================================================================================================================== 00:15:20.258 [2024-11-26T04:11:22.026Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.258 04:11:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.258 04:11:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.258 04:11:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85310' 00:15:20.258 04:11:21 -- common/autotest_common.sh@955 -- # kill 85310 00:15:20.258 04:11:21 -- common/autotest_common.sh@960 -- # wait 85310 00:15:20.516 04:11:22 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:20.516 04:11:22 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:20.516 04:11:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:20.516 04:11:22 -- nvmf/common.sh@116 -- # sync 00:15:20.516 04:11:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:20.516 04:11:22 -- nvmf/common.sh@119 -- # set +e 00:15:20.516 04:11:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:20.516 04:11:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:20.516 rmmod nvme_tcp 00:15:20.516 rmmod nvme_fabrics 00:15:20.516 rmmod nvme_keyring 00:15:20.516 04:11:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:20.516 04:11:22 -- nvmf/common.sh@123 -- # set -e 00:15:20.516 04:11:22 -- nvmf/common.sh@124 -- # return 0 00:15:20.516 04:11:22 -- nvmf/common.sh@477 -- # '[' -n 85260 ']' 00:15:20.516 04:11:22 -- nvmf/common.sh@478 -- # killprocess 85260 00:15:20.516 04:11:22 -- common/autotest_common.sh@936 -- # '[' -z 85260 ']' 00:15:20.516 04:11:22 -- common/autotest_common.sh@940 -- # kill -0 85260 00:15:20.516 04:11:22 -- common/autotest_common.sh@941 -- # uname 00:15:20.517 04:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.517 04:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85260 00:15:20.517 killing process with pid 85260 00:15:20.517 04:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:20.517 04:11:22 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:20.517 04:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85260' 00:15:20.517 04:11:22 -- common/autotest_common.sh@955 -- # kill 85260 00:15:20.517 04:11:22 -- common/autotest_common.sh@960 -- # wait 85260 00:15:20.775 04:11:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:20.775 04:11:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:20.775 04:11:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:20.775 04:11:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.775 04:11:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:20.775 04:11:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.776 04:11:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.776 04:11:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.776 04:11:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:20.776 00:15:20.776 real 0m13.716s 00:15:20.776 user 0m22.609s 00:15:20.776 sys 0m2.656s 00:15:20.776 04:11:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:20.776 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:15:20.776 ************************************ 00:15:20.776 END TEST nvmf_queue_depth 00:15:20.776 ************************************ 00:15:20.776 04:11:22 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.776 04:11:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:20.776 04:11:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.776 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:15:20.776 ************************************ 00:15:20.776 START TEST nvmf_multipath 00:15:20.776 ************************************ 00:15:20.776 04:11:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:21.039 * Looking for test storage... 00:15:21.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.039 04:11:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:21.039 04:11:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:21.039 04:11:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:21.039 04:11:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:21.039 04:11:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:21.039 04:11:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:21.039 04:11:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:21.039 04:11:22 -- scripts/common.sh@335 -- # IFS=.-: 00:15:21.039 04:11:22 -- scripts/common.sh@335 -- # read -ra ver1 00:15:21.040 04:11:22 -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.040 04:11:22 -- scripts/common.sh@336 -- # read -ra ver2 00:15:21.040 04:11:22 -- scripts/common.sh@337 -- # local 'op=<' 00:15:21.040 04:11:22 -- scripts/common.sh@339 -- # ver1_l=2 00:15:21.040 04:11:22 -- scripts/common.sh@340 -- # ver2_l=1 00:15:21.040 04:11:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:21.040 04:11:22 -- scripts/common.sh@343 -- # case "$op" in 00:15:21.040 04:11:22 -- scripts/common.sh@344 -- # : 1 00:15:21.040 04:11:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:21.040 04:11:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.040 04:11:22 -- scripts/common.sh@364 -- # decimal 1 00:15:21.040 04:11:22 -- scripts/common.sh@352 -- # local d=1 00:15:21.040 04:11:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.040 04:11:22 -- scripts/common.sh@354 -- # echo 1 00:15:21.040 04:11:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:21.040 04:11:22 -- scripts/common.sh@365 -- # decimal 2 00:15:21.040 04:11:22 -- scripts/common.sh@352 -- # local d=2 00:15:21.040 04:11:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.040 04:11:22 -- scripts/common.sh@354 -- # echo 2 00:15:21.040 04:11:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:21.040 04:11:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:21.040 04:11:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:21.040 04:11:22 -- scripts/common.sh@367 -- # return 0 00:15:21.040 04:11:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.040 04:11:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.040 --rc genhtml_branch_coverage=1 00:15:21.040 --rc genhtml_function_coverage=1 00:15:21.040 --rc genhtml_legend=1 00:15:21.040 --rc geninfo_all_blocks=1 00:15:21.040 --rc geninfo_unexecuted_blocks=1 00:15:21.040 00:15:21.040 ' 00:15:21.040 04:11:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.040 --rc genhtml_branch_coverage=1 00:15:21.040 --rc genhtml_function_coverage=1 00:15:21.040 --rc genhtml_legend=1 00:15:21.040 --rc geninfo_all_blocks=1 00:15:21.040 --rc geninfo_unexecuted_blocks=1 00:15:21.040 00:15:21.040 ' 00:15:21.040 04:11:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.040 --rc genhtml_branch_coverage=1 00:15:21.040 --rc genhtml_function_coverage=1 00:15:21.040 --rc genhtml_legend=1 00:15:21.040 --rc geninfo_all_blocks=1 00:15:21.040 --rc geninfo_unexecuted_blocks=1 00:15:21.040 00:15:21.040 ' 00:15:21.040 04:11:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.040 --rc genhtml_branch_coverage=1 00:15:21.040 --rc genhtml_function_coverage=1 00:15:21.040 --rc genhtml_legend=1 00:15:21.040 --rc geninfo_all_blocks=1 00:15:21.040 --rc geninfo_unexecuted_blocks=1 00:15:21.040 00:15:21.040 ' 00:15:21.040 04:11:22 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.040 04:11:22 -- nvmf/common.sh@7 -- # uname -s 00:15:21.040 04:11:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.040 04:11:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.040 04:11:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.040 04:11:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.040 04:11:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.040 04:11:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.040 04:11:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.040 04:11:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.040 04:11:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.040 04:11:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.040 04:11:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:21.040 
04:11:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:21.040 04:11:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.040 04:11:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.040 04:11:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.040 04:11:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.040 04:11:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.040 04:11:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.040 04:11:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.040 04:11:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.040 04:11:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.040 04:11:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.040 04:11:22 -- paths/export.sh@5 -- # export PATH 00:15:21.040 04:11:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.040 04:11:22 -- nvmf/common.sh@46 -- # : 0 00:15:21.040 04:11:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.040 04:11:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.040 04:11:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.040 04:11:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.040 04:11:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.040 04:11:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:21.040 04:11:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.040 04:11:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.040 04:11:22 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.040 04:11:22 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.040 04:11:22 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:21.040 04:11:22 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.040 04:11:22 -- target/multipath.sh@43 -- # nvmftestinit 00:15:21.040 04:11:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:21.040 04:11:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.040 04:11:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.040 04:11:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.040 04:11:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.040 04:11:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.040 04:11:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.040 04:11:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.040 04:11:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:21.040 04:11:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:21.040 04:11:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:21.040 04:11:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:21.040 04:11:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:21.040 04:11:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:21.040 04:11:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.040 04:11:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.040 04:11:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.040 04:11:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:21.040 04:11:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.040 04:11:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.040 04:11:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.040 04:11:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.040 04:11:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.040 04:11:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.040 04:11:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.040 04:11:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.040 04:11:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:21.040 04:11:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:21.040 Cannot find device "nvmf_tgt_br" 00:15:21.040 04:11:22 -- nvmf/common.sh@154 -- # true 00:15:21.040 04:11:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:21.040 Cannot find device "nvmf_tgt_br2" 00:15:21.040 04:11:22 -- nvmf/common.sh@155 -- # true 00:15:21.040 04:11:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:21.040 04:11:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:21.040 Cannot find device "nvmf_tgt_br" 00:15:21.040 04:11:22 -- nvmf/common.sh@157 -- # true 00:15:21.040 04:11:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:21.040 Cannot find device "nvmf_tgt_br2" 00:15:21.040 04:11:22 -- nvmf/common.sh@158 -- # true 00:15:21.040 04:11:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:21.321 04:11:22 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:21.321 04:11:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.321 04:11:22 -- nvmf/common.sh@161 -- # true 00:15:21.321 04:11:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.321 04:11:22 -- nvmf/common.sh@162 -- # true 00:15:21.321 04:11:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.321 04:11:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:21.321 04:11:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:21.321 04:11:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:21.321 04:11:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:21.321 04:11:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.321 04:11:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.321 04:11:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.321 04:11:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.321 04:11:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:21.321 04:11:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:21.321 04:11:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:21.321 04:11:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:21.321 04:11:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.321 04:11:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.321 04:11:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.321 04:11:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:21.321 04:11:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:21.321 04:11:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.321 04:11:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.321 04:11:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:21.321 04:11:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.321 04:11:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.321 04:11:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:21.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:15:21.321 00:15:21.321 --- 10.0.0.2 ping statistics --- 00:15:21.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.321 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:21.321 04:11:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:21.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:21.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:15:21.321 00:15:21.321 --- 10.0.0.3 ping statistics --- 00:15:21.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.321 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:21.321 04:11:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:21.321 00:15:21.321 --- 10.0.0.1 ping statistics --- 00:15:21.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.322 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:21.322 04:11:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.322 04:11:23 -- nvmf/common.sh@421 -- # return 0 00:15:21.322 04:11:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:21.322 04:11:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.322 04:11:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:21.322 04:11:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:21.322 04:11:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.322 04:11:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:21.322 04:11:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:21.322 04:11:23 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:21.322 04:11:23 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:21.322 04:11:23 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:21.322 04:11:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:21.322 04:11:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.322 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:15:21.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.593 04:11:23 -- nvmf/common.sh@469 -- # nvmfpid=85651 00:15:21.593 04:11:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.593 04:11:23 -- nvmf/common.sh@470 -- # waitforlisten 85651 00:15:21.593 04:11:23 -- common/autotest_common.sh@829 -- # '[' -z 85651 ']' 00:15:21.593 04:11:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.593 04:11:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.593 04:11:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.593 04:11:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.593 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:15:21.593 [2024-11-26 04:11:23.119690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:21.593 [2024-11-26 04:11:23.119763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.593 [2024-11-26 04:11:23.249373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.593 [2024-11-26 04:11:23.320234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:21.593 [2024-11-26 04:11:23.320389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:21.593 [2024-11-26 04:11:23.320403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.593 [2024-11-26 04:11:23.320411] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.593 [2024-11-26 04:11:23.320584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.593 [2024-11-26 04:11:23.320897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.593 [2024-11-26 04:11:23.321613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.593 [2024-11-26 04:11:23.321672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.531 04:11:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.531 04:11:24 -- common/autotest_common.sh@862 -- # return 0 00:15:22.531 04:11:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:22.531 04:11:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.531 04:11:24 -- common/autotest_common.sh@10 -- # set +x 00:15:22.531 04:11:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.531 04:11:24 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.790 [2024-11-26 04:11:24.322894] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.790 04:11:24 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:23.049 Malloc0 00:15:23.049 04:11:24 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:23.049 04:11:24 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.308 04:11:25 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.567 [2024-11-26 04:11:25.216442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.567 04:11:25 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:23.825 [2024-11-26 04:11:25.428692] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:23.825 04:11:25 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:24.084 04:11:25 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:24.343 04:11:25 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:24.343 04:11:25 -- common/autotest_common.sh@1187 -- # local i=0 00:15:24.343 04:11:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.343 04:11:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:24.343 04:11:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:26.247 04:11:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
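The two nvme connect calls just traced give the kernel initiator two TCP paths to the same subsystem. A condensed sketch of that multipath wiring, with addresses and NQNs as logged; $NVME_HOSTNQN and $NVME_HOSTID are the values generated earlier in the trace, and the connect flags are copied verbatim from it:
  # Same subsystem exported on both target addresses (multipath.sh@59-65)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # One connect per path from the host (multipath.sh@67-68)
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # The two paths then surface under /sys/class/nvme-subsystem/nvme-subsys0/nvme*c* (nvme0c0n1 and nvme0c1n1 below)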
00:15:26.247 04:11:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:26.247 04:11:27 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.247 04:11:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:26.247 04:11:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.247 04:11:27 -- common/autotest_common.sh@1197 -- # return 0 00:15:26.247 04:11:27 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:26.247 04:11:27 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:26.247 04:11:27 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:26.247 04:11:27 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:26.247 04:11:27 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:26.247 04:11:27 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:26.247 04:11:27 -- target/multipath.sh@38 -- # return 0 00:15:26.247 04:11:27 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:26.247 04:11:27 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:26.247 04:11:27 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:26.247 04:11:27 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:26.247 04:11:27 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:26.247 04:11:27 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:26.247 04:11:27 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:26.247 04:11:27 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:26.247 04:11:27 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.247 04:11:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.247 04:11:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.247 04:11:27 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:26.247 04:11:27 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:26.247 04:11:27 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:26.247 04:11:27 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.247 04:11:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.247 04:11:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.247 04:11:27 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:26.247 04:11:27 -- target/multipath.sh@85 -- # echo numa 00:15:26.247 04:11:27 -- target/multipath.sh@88 -- # fio_pid=85787 00:15:26.247 04:11:27 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:26.247 04:11:27 -- target/multipath.sh@90 -- # sleep 1 00:15:26.247 [global] 00:15:26.247 thread=1 00:15:26.247 invalidate=1 00:15:26.247 rw=randrw 00:15:26.247 time_based=1 00:15:26.247 runtime=6 00:15:26.247 ioengine=libaio 00:15:26.247 direct=1 00:15:26.247 bs=4096 00:15:26.247 iodepth=128 00:15:26.247 norandommap=0 00:15:26.247 numjobs=1 00:15:26.247 00:15:26.247 verify_dump=1 00:15:26.247 verify_backlog=512 00:15:26.247 verify_state_save=0 00:15:26.247 do_verify=1 00:15:26.247 verify=crc32c-intel 00:15:26.247 [job0] 00:15:26.247 filename=/dev/nvme0n1 00:15:26.247 Could not set queue depth (nvme0n1) 00:15:26.507 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:26.507 fio-3.35 00:15:26.507 Starting 1 thread 00:15:27.445 04:11:28 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:27.445 04:11:29 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:27.703 04:11:29 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:27.703 04:11:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:27.703 04:11:29 -- target/multipath.sh@22 -- # local timeout=20 00:15:27.703 04:11:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:27.703 04:11:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:27.703 04:11:29 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:27.703 04:11:29 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:27.703 04:11:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:27.704 04:11:29 -- target/multipath.sh@22 -- # local timeout=20 00:15:27.704 04:11:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:27.704 04:11:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.704 04:11:29 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:27.704 04:11:29 -- target/multipath.sh@25 -- # sleep 1s 00:15:29.082 04:11:30 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:29.082 04:11:30 -- target/multipath.sh@25 -- # [[ ! 
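The multipath.sh@18-26 xtrace lines above poll sysfs for the ANA state of one path. A reconstruction of that helper, inferred from the trace and offered only as a sketch (the loop structure and return handling are assumptions, not the script verbatim):
  check_ana_state() {
      # Wait up to ~20 s for /sys/block/<path>/ana_state to exist and report the expected ANA state.
      local path=$1 ana_state=$2
      local timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1
          sleep 1s
      done
  }
Usage as seen in the trace: check_ana_state nvme0c0n1 optimized, check_ana_state nvme0c1n1 inaccessible, and so on.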
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.082 04:11:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:29.082 04:11:30 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:29.082 04:11:30 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:29.341 04:11:30 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:29.341 04:11:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:29.341 04:11:30 -- target/multipath.sh@22 -- # local timeout=20 00:15:29.341 04:11:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:29.341 04:11:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:29.341 04:11:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:29.341 04:11:30 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:29.341 04:11:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:29.341 04:11:30 -- target/multipath.sh@22 -- # local timeout=20 00:15:29.341 04:11:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:29.341 04:11:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.341 04:11:30 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:29.341 04:11:30 -- target/multipath.sh@25 -- # sleep 1s 00:15:30.279 04:11:31 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:30.279 04:11:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:30.279 04:11:31 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:30.279 04:11:31 -- target/multipath.sh@104 -- # wait 85787 00:15:32.813 00:15:32.813 job0: (groupid=0, jobs=1): err= 0: pid=85815: Tue Nov 26 04:11:34 2024 00:15:32.813 read: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(311MiB/6005msec) 00:15:32.813 slat (usec): min=5, max=4485, avg=43.34, stdev=196.41 00:15:32.813 clat (usec): min=1036, max=13799, avg=6658.67, stdev=984.42 00:15:32.814 lat (usec): min=1143, max=13806, avg=6702.01, stdev=994.25 00:15:32.814 clat percentiles (usec): 00:15:32.814 | 1.00th=[ 4113], 5.00th=[ 5276], 10.00th=[ 5669], 20.00th=[ 5997], 00:15:32.814 | 30.00th=[ 6128], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6849], 00:15:32.814 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8225], 00:15:32.814 | 99.00th=[ 9765], 99.50th=[10159], 99.90th=[11076], 99.95th=[11338], 00:15:32.814 | 99.99th=[12256] 00:15:32.814 bw ( KiB/s): min= 9160, max=34960, per=52.81%, avg=27965.82, stdev=8375.21, samples=11 00:15:32.814 iops : min= 2290, max= 8740, avg=6991.45, stdev=2093.80, samples=11 00:15:32.814 write: IOPS=7886, BW=30.8MiB/s (32.3MB/s)(157MiB/5085msec); 0 zone resets 00:15:32.814 slat (usec): min=11, max=4366, avg=54.30, stdev=131.87 00:15:32.814 clat (usec): min=498, max=13195, avg=5790.81, stdev=877.50 00:15:32.814 lat (usec): min=550, max=13304, avg=5845.11, stdev=880.15 00:15:32.814 clat percentiles (usec): 00:15:32.814 | 1.00th=[ 3195], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5276], 00:15:32.814 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 5997], 00:15:32.814 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 6915], 00:15:32.814 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[11994], 00:15:32.814 | 99.99th=[13042] 00:15:32.814 bw ( KiB/s): min= 9328, max=34232, per=88.57%, avg=27942.55, stdev=8063.75, samples=11 00:15:32.814 iops : min= 2332, max= 8558, avg=6985.64, stdev=2015.94, samples=11 00:15:32.814 lat (usec) : 500=0.01%, 1000=0.01% 00:15:32.814 lat (msec) : 2=0.03%, 4=1.90%, 10=97.53%, 20=0.55% 00:15:32.814 cpu : usr=6.06%, sys=23.91%, ctx=7131, majf=0, minf=151 00:15:32.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:32.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.814 issued rwts: total=79496,40105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.814 00:15:32.814 Run status group 0 (all jobs): 00:15:32.814 READ: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=311MiB (326MB), run=6005-6005msec 00:15:32.814 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=157MiB (164MB), run=5085-5085msec 00:15:32.814 00:15:32.814 Disk stats (read/write): 00:15:32.814 nvme0n1: ios=78432/39308, merge=0/0, ticks=487432/210933, in_queue=698365, util=98.56% 00:15:32.814 04:11:34 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:32.814 04:11:34 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:33.072 04:11:34 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:33.072 04:11:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:33.072 04:11:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.072 04:11:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:33.072 04:11:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:33.072 04:11:34 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:33.072 04:11:34 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:33.072 04:11:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:33.072 04:11:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.072 04:11:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:33.072 04:11:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:33.072 04:11:34 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:33.072 04:11:34 -- target/multipath.sh@25 -- # sleep 1s 00:15:34.006 04:11:35 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:34.006 04:11:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.006 04:11:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:34.006 04:11:35 -- target/multipath.sh@113 -- # echo round-robin 00:15:34.006 04:11:35 -- target/multipath.sh@116 -- # fio_pid=85940 00:15:34.006 04:11:35 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:34.006 04:11:35 -- target/multipath.sh@118 -- # sleep 1 00:15:34.265 [global] 00:15:34.265 thread=1 00:15:34.265 invalidate=1 00:15:34.265 rw=randrw 00:15:34.265 time_based=1 00:15:34.265 runtime=6 00:15:34.265 ioengine=libaio 00:15:34.265 direct=1 00:15:34.265 bs=4096 00:15:34.265 iodepth=128 00:15:34.265 norandommap=0 00:15:34.265 numjobs=1 00:15:34.265 00:15:34.265 verify_dump=1 00:15:34.266 verify_backlog=512 00:15:34.266 verify_state_save=0 00:15:34.266 do_verify=1 00:15:34.266 verify=crc32c-intel 00:15:34.266 [job0] 00:15:34.266 filename=/dev/nvme0n1 00:15:34.266 Could not set queue depth (nvme0n1) 00:15:34.266 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.266 fio-3.35 00:15:34.266 Starting 1 thread 00:15:35.202 04:11:36 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:35.460 04:11:37 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:35.719 04:11:37 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:35.719 04:11:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:35.719 04:11:37 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.719 04:11:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:35.719 04:11:37 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:35.719 04:11:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:35.719 04:11:37 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:35.719 04:11:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:35.719 04:11:37 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.719 04:11:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:35.719 04:11:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.719 04:11:37 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:35.719 04:11:37 -- target/multipath.sh@25 -- # sleep 1s 00:15:36.657 04:11:38 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:36.657 04:11:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.657 04:11:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:36.657 04:11:38 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:36.916 04:11:38 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:37.175 04:11:38 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:37.175 04:11:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:37.175 04:11:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.175 04:11:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:37.175 04:11:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:37.175 04:11:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:37.175 04:11:38 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:37.175 04:11:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:37.175 04:11:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.175 04:11:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:37.175 04:11:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.175 04:11:38 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:37.175 04:11:38 -- target/multipath.sh@25 -- # sleep 1s 00:15:38.110 04:11:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:38.110 04:11:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.110 04:11:39 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:38.110 04:11:39 -- target/multipath.sh@132 -- # wait 85940 00:15:40.645 00:15:40.645 job0: (groupid=0, jobs=1): err= 0: pid=85965: Tue Nov 26 04:11:42 2024 00:15:40.645 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(314MiB/6002msec) 00:15:40.645 slat (nsec): min=1818, max=6632.4k, avg=38179.85, stdev=183983.91 00:15:40.645 clat (usec): min=377, max=13361, avg=6613.30, stdev=1138.19 00:15:40.645 lat (usec): min=390, max=13395, avg=6651.48, stdev=1145.45 00:15:40.645 clat percentiles (usec): 00:15:40.645 | 1.00th=[ 3720], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 5866], 00:15:40.645 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6521], 60.00th=[ 6783], 00:15:40.645 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7963], 95.00th=[ 8586], 00:15:40.645 | 99.00th=[ 9765], 99.50th=[10159], 99.90th=[11207], 99.95th=[11600], 00:15:40.645 | 99.99th=[12256] 00:15:40.645 bw ( KiB/s): min= 9720, max=36184, per=52.15%, avg=27928.82, stdev=8284.65, samples=11 00:15:40.645 iops : min= 2430, max= 9046, avg=6982.18, stdev=2071.14, samples=11 00:15:40.645 write: IOPS=7931, BW=31.0MiB/s (32.5MB/s)(160MiB/5173msec); 0 zone resets 00:15:40.645 slat (usec): min=2, max=2052, avg=48.53, stdev=122.36 00:15:40.645 clat (usec): min=234, max=11672, avg=5608.75, stdev=1034.73 00:15:40.645 lat (usec): min=262, max=11697, avg=5657.27, stdev=1040.29 00:15:40.645 clat percentiles (usec): 00:15:40.645 | 1.00th=[ 2900], 5.00th=[ 3621], 10.00th=[ 4146], 20.00th=[ 4883], 00:15:40.645 | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5735], 60.00th=[ 5932], 00:15:40.645 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6652], 95.00th=[ 6980], 00:15:40.645 | 99.00th=[ 8225], 99.50th=[ 8848], 99.90th=[10028], 99.95th=[10290], 00:15:40.645 | 99.99th=[11469] 00:15:40.645 bw ( KiB/s): min=10008, max=35536, per=87.98%, avg=27913.00, stdev=8025.57, samples=11 00:15:40.645 iops : min= 2502, max= 8884, avg=6978.18, stdev=2006.33, samples=11 00:15:40.645 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:15:40.645 lat (msec) : 2=0.13%, 4=3.78%, 10=95.59%, 20=0.47% 00:15:40.645 cpu : usr=5.80%, sys=22.78%, ctx=7321, majf=0, minf=127 00:15:40.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:40.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.645 issued rwts: total=80358,41031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.645 00:15:40.645 Run status group 0 (all jobs): 00:15:40.645 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=314MiB (329MB), run=6002-6002msec 00:15:40.645 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=160MiB (168MB), run=5173-5173msec 00:15:40.645 00:15:40.645 Disk stats (read/write): 00:15:40.645 nvme0n1: ios=79380/40182, merge=0/0, ticks=491974/209447, in_queue=701421, util=98.63% 00:15:40.645 04:11:42 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:40.645 04:11:42 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.645 04:11:42 -- common/autotest_common.sh@1208 -- # local i=0 00:15:40.645 04:11:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 
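The fio run that just completed (util=98.63%) is the failover exercise: with round-robin selected for the subsystem (the echo at multipath.sh@113), the script flips each listener's ANA state mid-run and waits for the host to observe it, so I/O keeps flowing over whichever path is still serviceable. Condensed from the trace at multipath.sh@120-130, commands as logged:
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  check_ana_state nvme0c0n1 inaccessible
  check_ana_state nvme0c1n1 non-optimized
  # ...then swap the roles back the other way before the run ends
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  check_ana_state nvme0c0n1 non-optimized
  check_ana_state nvme0c1n1 inaccessible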
00:15:40.645 04:11:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.645 04:11:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:40.645 04:11:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.645 04:11:42 -- common/autotest_common.sh@1220 -- # return 0 00:15:40.645 04:11:42 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.905 04:11:42 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:40.905 04:11:42 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:40.905 04:11:42 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:40.905 04:11:42 -- target/multipath.sh@144 -- # nvmftestfini 00:15:40.905 04:11:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.905 04:11:42 -- nvmf/common.sh@116 -- # sync 00:15:40.905 04:11:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:40.905 04:11:42 -- nvmf/common.sh@119 -- # set +e 00:15:40.905 04:11:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.905 04:11:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:40.905 rmmod nvme_tcp 00:15:40.905 rmmod nvme_fabrics 00:15:40.905 rmmod nvme_keyring 00:15:40.905 04:11:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:41.165 04:11:42 -- nvmf/common.sh@123 -- # set -e 00:15:41.165 04:11:42 -- nvmf/common.sh@124 -- # return 0 00:15:41.165 04:11:42 -- nvmf/common.sh@477 -- # '[' -n 85651 ']' 00:15:41.165 04:11:42 -- nvmf/common.sh@478 -- # killprocess 85651 00:15:41.165 04:11:42 -- common/autotest_common.sh@936 -- # '[' -z 85651 ']' 00:15:41.165 04:11:42 -- common/autotest_common.sh@940 -- # kill -0 85651 00:15:41.165 04:11:42 -- common/autotest_common.sh@941 -- # uname 00:15:41.165 04:11:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.165 04:11:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85651 00:15:41.165 killing process with pid 85651 00:15:41.165 04:11:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.165 04:11:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.165 04:11:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85651' 00:15:41.165 04:11:42 -- common/autotest_common.sh@955 -- # kill 85651 00:15:41.165 04:11:42 -- common/autotest_common.sh@960 -- # wait 85651 00:15:41.424 04:11:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:41.424 04:11:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:41.424 04:11:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:41.424 04:11:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.424 04:11:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:41.424 04:11:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.424 04:11:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.424 04:11:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.424 04:11:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:41.424 00:15:41.424 real 0m20.559s 00:15:41.424 user 1m20.042s 00:15:41.424 sys 0m6.353s 00:15:41.424 04:11:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:41.424 04:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:41.424 ************************************ 00:15:41.424 END TEST nvmf_multipath 00:15:41.424 ************************************ 00:15:41.424 04:11:43 -- 
nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.424 04:11:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.424 04:11:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.424 04:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:41.424 ************************************ 00:15:41.424 START TEST nvmf_zcopy 00:15:41.424 ************************************ 00:15:41.424 04:11:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.424 * Looking for test storage... 00:15:41.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.424 04:11:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:41.424 04:11:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:41.424 04:11:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:41.683 04:11:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:41.683 04:11:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:41.683 04:11:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:41.683 04:11:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:41.683 04:11:43 -- scripts/common.sh@335 -- # IFS=.-: 00:15:41.683 04:11:43 -- scripts/common.sh@335 -- # read -ra ver1 00:15:41.683 04:11:43 -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.683 04:11:43 -- scripts/common.sh@336 -- # read -ra ver2 00:15:41.683 04:11:43 -- scripts/common.sh@337 -- # local 'op=<' 00:15:41.683 04:11:43 -- scripts/common.sh@339 -- # ver1_l=2 00:15:41.683 04:11:43 -- scripts/common.sh@340 -- # ver2_l=1 00:15:41.683 04:11:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:41.683 04:11:43 -- scripts/common.sh@343 -- # case "$op" in 00:15:41.683 04:11:43 -- scripts/common.sh@344 -- # : 1 00:15:41.683 04:11:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:41.683 04:11:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.683 04:11:43 -- scripts/common.sh@364 -- # decimal 1 00:15:41.683 04:11:43 -- scripts/common.sh@352 -- # local d=1 00:15:41.683 04:11:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.683 04:11:43 -- scripts/common.sh@354 -- # echo 1 00:15:41.683 04:11:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:41.683 04:11:43 -- scripts/common.sh@365 -- # decimal 2 00:15:41.683 04:11:43 -- scripts/common.sh@352 -- # local d=2 00:15:41.683 04:11:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.683 04:11:43 -- scripts/common.sh@354 -- # echo 2 00:15:41.683 04:11:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:41.683 04:11:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:41.683 04:11:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:41.683 04:11:43 -- scripts/common.sh@367 -- # return 0 00:15:41.683 04:11:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.683 04:11:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:41.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.683 --rc genhtml_branch_coverage=1 00:15:41.683 --rc genhtml_function_coverage=1 00:15:41.683 --rc genhtml_legend=1 00:15:41.684 --rc geninfo_all_blocks=1 00:15:41.684 --rc geninfo_unexecuted_blocks=1 00:15:41.684 00:15:41.684 ' 00:15:41.684 04:11:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:41.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.684 --rc genhtml_branch_coverage=1 00:15:41.684 --rc genhtml_function_coverage=1 00:15:41.684 --rc genhtml_legend=1 00:15:41.684 --rc geninfo_all_blocks=1 00:15:41.684 --rc geninfo_unexecuted_blocks=1 00:15:41.684 00:15:41.684 ' 00:15:41.684 04:11:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:41.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.684 --rc genhtml_branch_coverage=1 00:15:41.684 --rc genhtml_function_coverage=1 00:15:41.684 --rc genhtml_legend=1 00:15:41.684 --rc geninfo_all_blocks=1 00:15:41.684 --rc geninfo_unexecuted_blocks=1 00:15:41.684 00:15:41.684 ' 00:15:41.684 04:11:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:41.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.684 --rc genhtml_branch_coverage=1 00:15:41.684 --rc genhtml_function_coverage=1 00:15:41.684 --rc genhtml_legend=1 00:15:41.684 --rc geninfo_all_blocks=1 00:15:41.684 --rc geninfo_unexecuted_blocks=1 00:15:41.684 00:15:41.684 ' 00:15:41.684 04:11:43 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.684 04:11:43 -- nvmf/common.sh@7 -- # uname -s 00:15:41.684 04:11:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.684 04:11:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.684 04:11:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.684 04:11:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.684 04:11:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.684 04:11:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.684 04:11:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.684 04:11:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.684 04:11:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.684 04:11:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.684 04:11:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:41.684 
04:11:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:15:41.684 04:11:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.684 04:11:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.684 04:11:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.684 04:11:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.684 04:11:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.684 04:11:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.684 04:11:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.684 04:11:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.684 04:11:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.684 04:11:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.684 04:11:43 -- paths/export.sh@5 -- # export PATH 00:15:41.684 04:11:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.684 04:11:43 -- nvmf/common.sh@46 -- # : 0 00:15:41.684 04:11:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:41.684 04:11:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:41.684 04:11:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:41.684 04:11:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.684 04:11:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.684 04:11:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:41.684 04:11:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:41.684 04:11:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:41.684 04:11:43 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:41.684 04:11:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:41.684 04:11:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.684 04:11:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:41.684 04:11:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:41.684 04:11:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:41.684 04:11:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.684 04:11:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.684 04:11:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.684 04:11:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:41.684 04:11:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:41.684 04:11:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:41.684 04:11:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:41.684 04:11:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:41.684 04:11:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:41.684 04:11:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.684 04:11:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.684 04:11:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.684 04:11:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:41.684 04:11:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.684 04:11:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.684 04:11:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.684 04:11:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.684 04:11:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.684 04:11:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.684 04:11:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.684 04:11:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.684 04:11:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:41.684 04:11:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:41.684 Cannot find device "nvmf_tgt_br" 00:15:41.684 04:11:43 -- nvmf/common.sh@154 -- # true 00:15:41.684 04:11:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.684 Cannot find device "nvmf_tgt_br2" 00:15:41.684 04:11:43 -- nvmf/common.sh@155 -- # true 00:15:41.684 04:11:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:41.684 04:11:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:41.684 Cannot find device "nvmf_tgt_br" 00:15:41.684 04:11:43 -- nvmf/common.sh@157 -- # true 00:15:41.684 04:11:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:41.684 Cannot find device "nvmf_tgt_br2" 00:15:41.684 04:11:43 -- nvmf/common.sh@158 -- # true 00:15:41.684 04:11:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:41.943 04:11:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:41.943 04:11:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.943 04:11:43 -- nvmf/common.sh@161 -- # true 00:15:41.943 04:11:43 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.943 04:11:43 -- nvmf/common.sh@162 -- # true 00:15:41.943 04:11:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.943 04:11:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.943 04:11:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.943 04:11:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.943 04:11:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.943 04:11:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.943 04:11:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.943 04:11:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.943 04:11:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.943 04:11:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:41.943 04:11:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:41.943 04:11:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:41.943 04:11:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:41.943 04:11:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.943 04:11:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.943 04:11:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.943 04:11:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:41.943 04:11:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:41.943 04:11:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.943 04:11:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.943 04:11:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.943 04:11:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.943 04:11:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.943 04:11:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:41.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:41.943 00:15:41.943 --- 10.0.0.2 ping statistics --- 00:15:41.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.943 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:41.943 04:11:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:41.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:41.943 00:15:41.943 --- 10.0.0.3 ping statistics --- 00:15:41.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.943 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:41.943 04:11:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:41.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:41.943 00:15:41.943 --- 10.0.0.1 ping statistics --- 00:15:41.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.944 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:41.944 04:11:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.944 04:11:43 -- nvmf/common.sh@421 -- # return 0 00:15:41.944 04:11:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:41.944 04:11:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.944 04:11:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:41.944 04:11:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:41.944 04:11:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.944 04:11:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:41.944 04:11:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:41.944 04:11:43 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:41.944 04:11:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:41.944 04:11:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.944 04:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:41.944 04:11:43 -- nvmf/common.sh@469 -- # nvmfpid=86250 00:15:41.944 04:11:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.944 04:11:43 -- nvmf/common.sh@470 -- # waitforlisten 86250 00:15:41.944 04:11:43 -- common/autotest_common.sh@829 -- # '[' -z 86250 ']' 00:15:41.944 04:11:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.944 04:11:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.944 04:11:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.944 04:11:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.944 04:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:42.202 [2024-11-26 04:11:43.745755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:42.202 [2024-11-26 04:11:43.745833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.202 [2024-11-26 04:11:43.883571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.202 [2024-11-26 04:11:43.946466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.202 [2024-11-26 04:11:43.946593] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.202 [2024-11-26 04:11:43.946605] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.202 [2024-11-26 04:11:43.946613] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:42.202 [2024-11-26 04:11:43.946637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.138 04:11:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.138 04:11:44 -- common/autotest_common.sh@862 -- # return 0 00:15:43.138 04:11:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.138 04:11:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.138 04:11:44 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:43.138 04:11:44 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:43.138 04:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 [2024-11-26 04:11:44.815489] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.138 04:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:44 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:43.138 04:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:44 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.138 04:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 [2024-11-26 04:11:44.835635] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.138 04:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:44 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:43.138 04:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:44 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:43.138 04:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 malloc0 00:15:43.138 04:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:44 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:43.138 04:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.138 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:44 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:43.138 04:11:44 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:43.138 04:11:44 -- nvmf/common.sh@520 -- # config=() 00:15:43.138 04:11:44 -- nvmf/common.sh@520 -- # local subsystem config 00:15:43.138 04:11:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:43.138 04:11:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:43.138 { 00:15:43.138 "params": { 00:15:43.138 "name": "Nvme$subsystem", 00:15:43.138 "trtype": "$TEST_TRANSPORT", 
00:15:43.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:43.138 "adrfam": "ipv4", 00:15:43.138 "trsvcid": "$NVMF_PORT", 00:15:43.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:43.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:43.138 "hdgst": ${hdgst:-false}, 00:15:43.138 "ddgst": ${ddgst:-false} 00:15:43.138 }, 00:15:43.138 "method": "bdev_nvme_attach_controller" 00:15:43.138 } 00:15:43.138 EOF 00:15:43.138 )") 00:15:43.138 04:11:44 -- nvmf/common.sh@542 -- # cat 00:15:43.138 04:11:44 -- nvmf/common.sh@544 -- # jq . 00:15:43.138 04:11:44 -- nvmf/common.sh@545 -- # IFS=, 00:15:43.138 04:11:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:43.138 "params": { 00:15:43.138 "name": "Nvme1", 00:15:43.138 "trtype": "tcp", 00:15:43.138 "traddr": "10.0.0.2", 00:15:43.138 "adrfam": "ipv4", 00:15:43.138 "trsvcid": "4420", 00:15:43.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.138 "hdgst": false, 00:15:43.138 "ddgst": false 00:15:43.138 }, 00:15:43.138 "method": "bdev_nvme_attach_controller" 00:15:43.138 }' 00:15:43.397 [2024-11-26 04:11:44.927665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:43.397 [2024-11-26 04:11:44.927788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86301 ] 00:15:43.397 [2024-11-26 04:11:45.070295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.657 [2024-11-26 04:11:45.158688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.657 Running I/O for 10 seconds... 00:15:53.634 00:15:53.634 Latency(us) 00:15:53.634 [2024-11-26T04:11:55.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.634 [2024-11-26T04:11:55.402Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:53.634 Verification LBA range: start 0x0 length 0x1000 00:15:53.634 Nvme1n1 : 10.01 11082.33 86.58 0.00 0.00 11521.99 867.61 20375.74 00:15:53.634 [2024-11-26T04:11:55.402Z] =================================================================================================================== 00:15:53.634 [2024-11-26T04:11:55.402Z] Total : 11082.33 86.58 0.00 0.00 11521.99 867.61 20375.74 00:15:53.893 04:11:55 -- target/zcopy.sh@39 -- # perfpid=86424 00:15:53.893 04:11:55 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:53.893 04:11:55 -- common/autotest_common.sh@10 -- # set +x 00:15:53.893 04:11:55 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:53.893 04:11:55 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:53.893 04:11:55 -- nvmf/common.sh@520 -- # config=() 00:15:53.893 04:11:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:53.893 04:11:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:53.893 04:11:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:53.893 { 00:15:53.893 "params": { 00:15:53.893 "name": "Nvme$subsystem", 00:15:53.893 "trtype": "$TEST_TRANSPORT", 00:15:53.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:53.893 "adrfam": "ipv4", 00:15:53.893 "trsvcid": "$NVMF_PORT", 00:15:53.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:53.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:53.893 "hdgst": ${hdgst:-false}, 00:15:53.893 "ddgst": ${ddgst:-false} 
00:15:53.893 }, 00:15:53.893 "method": "bdev_nvme_attach_controller" 00:15:53.893 } 00:15:53.893 EOF 00:15:53.893 )") 00:15:53.893 04:11:55 -- nvmf/common.sh@542 -- # cat 00:15:53.893 [2024-11-26 04:11:55.636521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.893 [2024-11-26 04:11:55.636564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.893 04:11:55 -- nvmf/common.sh@544 -- # jq . 00:15:53.893 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.893 04:11:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:53.893 04:11:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:53.893 "params": { 00:15:53.893 "name": "Nvme1", 00:15:53.893 "trtype": "tcp", 00:15:53.893 "traddr": "10.0.0.2", 00:15:53.893 "adrfam": "ipv4", 00:15:53.894 "trsvcid": "4420", 00:15:53.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.894 "hdgst": false, 00:15:53.894 "ddgst": false 00:15:53.894 }, 00:15:53.894 "method": "bdev_nvme_attach_controller" 00:15:53.894 }' 00:15:53.894 [2024-11-26 04:11:55.648495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.894 [2024-11-26 04:11:55.648520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.894 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.660489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.660514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.672492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.672513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.684492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.684512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.694786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:54.154 [2024-11-26 04:11:55.694869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86424 ] 00:15:54.154 [2024-11-26 04:11:55.696511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.696533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.708499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.708520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.720500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.720516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.732501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.732517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.744507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.744817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.756517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.756690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.768522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.768693] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.780539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.780720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.792521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.792685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.804524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.804685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.816527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.816692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.828533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.828705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 [2024-11-26 04:11:55.832244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.840537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.840703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.154 [2024-11-26 04:11:55.852550] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.154 [2024-11-26 04:11:55.852573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.154 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.155 [2024-11-26 04:11:55.864546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.155 [2024-11-26 04:11:55.864733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.155 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.155 [2024-11-26 04:11:55.876552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.155 [2024-11-26 04:11:55.876578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.155 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.155 [2024-11-26 04:11:55.888554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.155 [2024-11-26 04:11:55.888577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.155 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.155 [2024-11-26 04:11:55.900554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.155 [2024-11-26 04:11:55.900758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.155 [2024-11-26 04:11:55.900885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.155 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.155 [2024-11-26 04:11:55.912561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.155 [2024-11-26 04:11:55.912759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.414 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.414 [2024-11-26 04:11:55.924565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.414 [2024-11-26 04:11:55.924754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:55.936571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:55.936781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:55.948570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:55.948782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:55.960574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:55.960795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:55.972576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:55.972798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:55.984577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:55.984790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:55.996583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:55.996609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.008578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.008602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.020579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.020765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.032607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.032635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.044630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.044655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.056686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.056740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.068652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.068857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.080684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.080913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.092779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.092809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 Running I/O for 5 seconds... 00:15:54.415 [2024-11-26 04:11:56.104742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.104778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.121751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.121782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.138463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.138496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.155819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.155851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.415 [2024-11-26 04:11:56.170871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.415 [2024-11-26 04:11:56.170902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.415 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.188132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.188165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.204286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.204318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error 
on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.221458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.221635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.237674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.237887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.255075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.255108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.271512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.271544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.288095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.288127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.305095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.305266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.322044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.322077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 
04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.339072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.339129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.353415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.353447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.362238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.362424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.372503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.372651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.384039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.384205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.399090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.399240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.415650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.415682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.426517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.426665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.675 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.675 [2024-11-26 04:11:56.435695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.675 [2024-11-26 04:11:56.435753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.448869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.448899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.457380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.457530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.466883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.467051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.480047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.480193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.488350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.488382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.500038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.500071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.511005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.511171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.518646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.518855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.530743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.530904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.542855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.543007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.551274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.551437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.566022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.566054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.574603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.574634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.588236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.588268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.596638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.596668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.611451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.611608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.937 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.937 [2024-11-26 04:11:56.621047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.937 [2024-11-26 04:11:56.621242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.938 [2024-11-26 04:11:56.636180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.938 [2024-11-26 04:11:56.636329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.938 [2024-11-26 04:11:56.652511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.938 [2024-11-26 04:11:56.652663] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.938 [2024-11-26 04:11:56.661943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.938 [2024-11-26 04:11:56.661974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.938 [2024-11-26 04:11:56.671257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.938 [2024-11-26 04:11:56.671290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.938 [2024-11-26 04:11:56.684060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.938 [2024-11-26 04:11:56.684108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.938 [2024-11-26 04:11:56.692241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.938 [2024-11-26 04:11:56.692390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.938 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.701828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.701991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.711602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.711792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.721490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 
04:11:56.721637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.732609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.732804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.741154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.741336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.751841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.752007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.761213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.761246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.772538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.772569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.780893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.780924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.796535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:55.214 [2024-11-26 04:11:56.796700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.807263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.807420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.823092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.823242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.839464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.839611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.850639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.850864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.859295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.859441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.869311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.869342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.882259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:55.214 [2024-11-26 04:11:56.882291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.889859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.889890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.901387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.901538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.913211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.913358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.921622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.921823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.932181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.932332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.941764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.941927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.214 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.214 [2024-11-26 04:11:56.952930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:55.214 [2024-11-26 04:11:56.953081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.215 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.215 [2024-11-26 04:11:56.964583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.215 [2024-11-26 04:11:56.964615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:56.972555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:56.972586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:56.987676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:56.987743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:56.996640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:56.996672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.012857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.013018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.029558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.029706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.040775] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.040926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.049144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.049293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.060588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.060619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.071151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.071182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.079110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.079141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.089940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.090106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.098934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.099114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 
04:11:57.108313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.108481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.118082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.118233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.127777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.127927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.137659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.137691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.147475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.147507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.156633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.156665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.166381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.166512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:55.491 [2024-11-26 04:11:57.179758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.179887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.187976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.188141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.202177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.202341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.210067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.210226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.220985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.221150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.236586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.236618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.491 [2024-11-26 04:11:57.247530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.491 [2024-11-26 04:11:57.247562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.491 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.263281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.263313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.279421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.279452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.290268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.290406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.298007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.298160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.309396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.309523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.320410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.320563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.335503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.335631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.346280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.346419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.354938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.355097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.364045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.364077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.372931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.372963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.381929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.382108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.390983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.391111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.399999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.400129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.409409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.409552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.418980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.419138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.433629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.433807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.444404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.444434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.751 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.751 [2024-11-26 04:11:57.452213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.751 [2024-11-26 04:11:57.452243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.752 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.752 [2024-11-26 04:11:57.463321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.752 [2024-11-26 04:11:57.463471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.752 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.752 [2024-11-26 04:11:57.474678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.752 [2024-11-26 04:11:57.474864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.752 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.752 [2024-11-26 04:11:57.490500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.752 [2024-11-26 04:11:57.490639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.752 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.752 [2024-11-26 04:11:57.507759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.752 [2024-11-26 04:11:57.507790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.752 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.011 [2024-11-26 04:11:57.516994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.011 [2024-11-26 04:11:57.517025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.011 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.011 [2024-11-26 04:11:57.527579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.011 [2024-11-26 04:11:57.527610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.535956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.535988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.548000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.548031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.555889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.555920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.564962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.564988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.573180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.573206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.582264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.582293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.591103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.591128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.599815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.599840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.609137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.609164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.618727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.618776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.631223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.631250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.639644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.639669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.652370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.652398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.663422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.663448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.670981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.671008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.682149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.682177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.690553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.690580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.701230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.701257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.711929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.711956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.719911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.719937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.731210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.731236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.742858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.742883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.012 [2024-11-26 04:11:57.758408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.012 [2024-11-26 04:11:57.758436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.012 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.775143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.775170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.791464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.791492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.808117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.808144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.824277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.824304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.841090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.841117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.853252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.853294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.864288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.864329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.880688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.880739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 
04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.891404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.891431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.899104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.899161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.910133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.910161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.921325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.921351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.937167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.937193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.953664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.953690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.970047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.970075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.986349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.986375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:57.997017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:57.997044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:58.005316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:58.005342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.272 [2024-11-26 04:11:58.015888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.272 [2024-11-26 04:11:58.015914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.272 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.273 [2024-11-26 04:11:58.024055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.273 [2024-11-26 04:11:58.024096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.273 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.273 [2024-11-26 04:11:58.033212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.273 [2024-11-26 04:11:58.033238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.042190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.042217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.051334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.051360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.060619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.060645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.069263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.069288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.082343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.082370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.098624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.098652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.109037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.109065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.116650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.116676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.127991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.128018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.532 [2024-11-26 04:11:58.136230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.532 [2024-11-26 04:11:58.136256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.532 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.146983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.147010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.157612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.157638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.165411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.165437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.176177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.176203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.184750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.184775] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.193379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.193405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.202510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.202537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.211429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.211456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.220381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.220407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.229356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.229383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.238500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.238526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.247092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 
04:11:58.247118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.255806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.255833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.264518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.264545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.273179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.273205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.281905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.281931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.533 [2024-11-26 04:11:58.292021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.533 [2024-11-26 04:11:58.292048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.533 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.302774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.302798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.309910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:56.793 [2024-11-26 04:11:58.309937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.321082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.321108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.329672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.329698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.338439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.338466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.347387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.347413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.356249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.356275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.365285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.365312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.374167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:56.793 [2024-11-26 04:11:58.374193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.383001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.383028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.391797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.391823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.400739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.400765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.409815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.409853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.418989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.419015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.428024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.428051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.436731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.436771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.445631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.445656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.454611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.454637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.463268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.463294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.793 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.793 [2024-11-26 04:11:58.472453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.793 [2024-11-26 04:11:58.472479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.481882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.481909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.490853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.490880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.502879] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.502906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.513168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.513194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.520890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.520917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.532450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.532477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.540922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.540948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.794 [2024-11-26 04:11:58.552675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.794 [2024-11-26 04:11:58.552702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.794 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.563048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.563074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 
04:11:58.570835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.570862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.585651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.585677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.594438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.594464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.607413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.607440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.615506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.615532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.626087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.626129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.634799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.634826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
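The entries above and below all record the same rejected RPC: nvmf_subsystem_add_ns is invoked for nqn.2016-06.io.spdk:cnode1 with bdev malloc0 and NSID 1, but that subsystem already owns NSID 1, so the target logs "Requested NSID 1 already in use" and answers with JSON-RPC error -32602 ("Invalid parameters", the standard JSON-RPC invalid-params code). The test evidently loops this call, which is why the same error repeats with only the timestamps changing. A minimal sketch of the request being retried here, not part of this run and assuming the target listens on the default SPDK RPC socket /var/tmp/spdk.sock:

#!/usr/bin/env python3
# Sketch only: replay the nvmf_subsystem_add_ns request shown in the log
# and print the target's JSON-RPC reply.
# Assumption: default SPDK RPC socket path /var/tmp/spdk.sock.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        # Same parameters as in the log; NSID 1 is already in use on cnode1,
        # so the expected reply is error code -32602, "Invalid parameters".
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    reply = json.loads(sock.recv(65536).decode())

print(reply.get("error") or reply.get("result"))
# Expected output against this target: {'code': -32602, 'message': 'Invalid parameters'}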
00:15:57.054 [2024-11-26 04:11:58.647699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.647783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.656405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.656430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.054 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.054 [2024-11-26 04:11:58.666661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.054 [2024-11-26 04:11:58.666687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.675590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.675617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.689055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.689081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.704278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.704305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.720779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.720800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.732608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.732635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.747670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.747695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.764033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.764061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.780941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.780968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.791160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.791185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.055 [2024-11-26 04:11:58.806679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.055 [2024-11-26 04:11:58.806718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.055 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.817235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.817261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.832487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.832515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.849288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.849315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.865096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.865123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.881643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.881671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.898616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.898643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.914201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.914228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.930651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.930679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.946925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.946951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.958679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.958706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.974828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.974853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:58.990499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:58.990526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:59.006667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:59.006694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:59.023818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:59.023844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:59.039427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:59.039455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:59.051308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:59.051335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.315 [2024-11-26 04:11:59.067030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.315 [2024-11-26 04:11:59.067057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.315 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.083900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.083926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.099169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.099195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.113771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.113797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.125390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.125417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.133085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.133112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.148426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.148455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.160431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.160457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.176563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.176591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.576 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.576 [2024-11-26 04:11:59.191660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.576 [2024-11-26 04:11:59.191687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.206884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.206910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.222731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.222756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.233833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.233858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.249766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.249791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.265404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.265431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.277106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.277133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.292777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.292803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.308392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.308419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.320478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.320504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.577 [2024-11-26 04:11:59.335216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.577 [2024-11-26 04:11:59.335242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.577 2024/11/26 04:11:59 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.347153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.347180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.363157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.363184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.378597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.378828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.390106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.390242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.406014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.406046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.421785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.421815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.437884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.437914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.445973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.446026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.460599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.460632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.475996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.476128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.492523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.492554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.508880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.508911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.519681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.519729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.535280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.535311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 
04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.551069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.551100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.560039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.560071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.575725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.575755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.840 [2024-11-26 04:11:59.589815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.840 [2024-11-26 04:11:59.589846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.840 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.605055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.605087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.621349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.621381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.637231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.637280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
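Aside: the call that keeps failing above is the plain JSON-RPC method nvmf_subsystem_add_ns with exactly the parameters shown in the log (nqn nqn.2016-06.io.spdk:cnode1, bdev malloc0, nsid 1). Below is a minimal Python sketch of an equivalent request; it assumes the default SPDK RPC Unix socket path /var/tmp/spdk.sock and that the whole JSON reply fits in a single recv(), neither of which is stated in this log. Because NSID 1 is already attached to the subsystem, the target is expected to answer with JSON-RPC error -32602 (Invalid parameters), which is exactly what the entries above and below keep reporting.

#!/usr/bin/env python3
# Hypothetical reproduction of the failing RPC call, for illustration only.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",  # method name as logged
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",  # subsystem NQN from the log
        "namespace": {"bdev_name": "malloc0", "nsid": 1},  # NSID 1 is already attached
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # Simplification: assume the entire reply arrives in one recv().
    reply = json.loads(sock.recv(65536).decode())

# With NSID 1 already in use on cnode1, the expected reply carries
# error code -32602 ("Invalid parameters"), matching the log lines.
print(reply.get("error", reply))
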
00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.648759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.648800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.664988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.665037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.673851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.673883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.690167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.690216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.706313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.706362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.723340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.723373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.738804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.738837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.754575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.754625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.771117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.771150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.787651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.787699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.804078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.804127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.820681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.820754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.831723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.831766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.100 [2024-11-26 04:11:59.847848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.100 [2024-11-26 04:11:59.847896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:58.100 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.359 [2024-11-26 04:11:59.864478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.359 [2024-11-26 04:11:59.864526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.359 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.359 [2024-11-26 04:11:59.881103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.359 [2024-11-26 04:11:59.881166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.897888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.897936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.913958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.914019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.924663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.924695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.940895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.940942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.956696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.956737] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.973384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.973417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:11:59.989506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:11:59.989537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.007236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.007304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.023912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.023991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.038969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.039019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.047489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.047538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.058292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 
04:12:00.058361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.069195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.069241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.085818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.085851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.102169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.102208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.360 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.360 [2024-11-26 04:12:00.119210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.360 [2024-11-26 04:12:00.119245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.129677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.129720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.145851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.145885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.155271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:58.619 [2024-11-26 04:12:00.155319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.169399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.169431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.177806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.177849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.192223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.192257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.200268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.200300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.215472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.215507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.224570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.619 [2024-11-26 04:12:00.224618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.619 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.619 [2024-11-26 04:12:00.240847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:58.620 [2024-11-26 04:12:00.240880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.257193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.257227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.274691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.274751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.285008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.285041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.300426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.300460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.316603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.316638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.333224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.333258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.344326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.344358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.360458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.360493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.620 [2024-11-26 04:12:00.376518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.620 [2024-11-26 04:12:00.376552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.620 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.879 [2024-11-26 04:12:00.393096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.879 [2024-11-26 04:12:00.393286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.409524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.409556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.427117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.427150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.442575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.442724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.459260] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.459292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.475894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.475926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.491496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.491530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.503251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.503281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.519816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.519847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.536018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.536050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.552206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.552238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 
04:12:00.568019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.568051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.584308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.584339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.595782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.595815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.611637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.611668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.880 [2024-11-26 04:12:00.627949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.880 [2024-11-26 04:12:00.627981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.880 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.643958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.643991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.654018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.654197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
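Aside: the bdevperf-style summary printed a few lines further down (Nvme1n1: 5.01 s runtime, 14123.34 IOPS, 110.34 MiB/s, average latency 9053.72 us at queue depth 128, IO size 8192) is internally consistent. A small sanity check using only numbers copied from that table, assuming 1 MiB = 2**20 bytes:

# Sanity check of the performance summary values below.
iops = 14123.34        # reported IOPS for Nvme1n1
io_size = 8192         # bytes per IO, from the job description
avg_lat_us = 9053.72   # reported average latency in microseconds
queue_depth = 128      # outstanding IOs ("depth: 128")

mib_per_s = iops * io_size / 2**20
print(f"throughput ~ {mib_per_s:.2f} MiB/s")  # ~110.34, matches the table

# Little's law: sustained IOPS should be roughly queue depth / average latency.
print(f"expected IOPS ~ {queue_depth / (avg_lat_us * 1e-6):.0f}")  # ~14138, close to the reported 14123.34
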
00:15:59.140 [2024-11-26 04:12:00.669582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.669741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.686632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.686664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.703352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.703385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.713763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.713792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.729595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.729741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.745680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.745855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.756447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.756479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.772003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.772036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.788365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.788397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.798921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.798952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.814906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.814938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.825369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.825506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.140 [2024-11-26 04:12:00.841773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.140 [2024-11-26 04:12:00.841805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.140 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-11-26 04:12:00.857767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-11-26 04:12:00.857798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-11-26 04:12:00.874043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-11-26 04:12:00.874076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.141 [2024-11-26 04:12:00.890348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.141 [2024-11-26 04:12:00.890380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.141 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.902270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.902306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.918490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.918524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.934962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.934996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.945960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.946018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.961901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.961932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.978030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.978062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:00.994720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:00.994778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.011131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.011164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.021610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.021818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.037772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.037803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.054220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.054249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.071150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.071178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.080218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.080244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.094120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.094154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.102641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.102668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 00:15:59.404 Latency(us) 00:15:59.404 [2024-11-26T04:12:01.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.404 [2024-11-26T04:12:01.172Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:59.404 Nvme1n1 : 5.01 14123.34 110.34 0.00 0.00 9053.72 3813.00 21448.15 00:15:59.404 [2024-11-26T04:12:01.172Z] =================================================================================================================== 00:15:59.404 [2024-11-26T04:12:01.172Z] Total : 14123.34 110.34 0.00 0.00 9053.72 3813.00 21448.15 00:15:59.404 [2024-11-26 04:12:01.112658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.112680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.120652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.120676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.132654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.132677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.140648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.140674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.152654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.152675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.404 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.404 [2024-11-26 04:12:01.164658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.404 [2024-11-26 04:12:01.164678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.172650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.172670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.180657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.180677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.188658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.188678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.196660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.196681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.208666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.208688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.220683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.220704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.232673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.664 [2024-11-26 04:12:01.232693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.664 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.664 [2024-11-26 04:12:01.244678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.244701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.252672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.252691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.264678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.264702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.272676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.272702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 
04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.284683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.284704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.292680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.292701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.304686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.304706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.316688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.316723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.328691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.328725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.336688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.336717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.348694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.348722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.360700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.360729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 [2024-11-26 04:12:01.372705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.665 [2024-11-26 04:12:01.372751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.665 2024/11/26 04:12:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.665 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86424) - No such process 00:15:59.665 04:12:01 -- target/zcopy.sh@49 -- # wait 86424 00:15:59.665 04:12:01 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.665 04:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.665 04:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.665 04:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.665 04:12:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:59.665 04:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.665 04:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.665 delay0 00:15:59.665 04:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.665 04:12:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:59.665 04:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.665 04:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.665 04:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.665 04:12:01 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:59.924 [2024-11-26 04:12:01.569292] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:06.488 Initializing NVMe Controllers 00:16:06.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:06.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:06.489 Initialization complete. Launching workers. 
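For reference, the namespace swap the zcopy test performs just before launching the abort example maps onto three plain JSON-RPC calls (rpc_cmd in the log is the test suite's wrapper around scripts/rpc.py). A minimal sketch, assuming a running target with subsystem nqn.2016-06.io.spdk:cnode1 and the malloc0 bdev from earlier in the run, executed from the SPDK repo root:

  # Detach the malloc-backed namespace, wrap the bdev in a delay bdev, re-attach it as NSID 1.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive aborts against the now-slow namespace over TCP (same arguments as the abort invocation above).
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'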
00:16:06.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 103 00:16:06.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 390, failed to submit 33 00:16:06.489 success 206, unsuccess 184, failed 0 00:16:06.489 04:12:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:06.489 04:12:07 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:06.489 04:12:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:06.489 04:12:07 -- nvmf/common.sh@116 -- # sync 00:16:06.489 04:12:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:06.489 04:12:07 -- nvmf/common.sh@119 -- # set +e 00:16:06.489 04:12:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:06.489 04:12:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:06.489 rmmod nvme_tcp 00:16:06.489 rmmod nvme_fabrics 00:16:06.489 rmmod nvme_keyring 00:16:06.489 04:12:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:06.489 04:12:07 -- nvmf/common.sh@123 -- # set -e 00:16:06.489 04:12:07 -- nvmf/common.sh@124 -- # return 0 00:16:06.489 04:12:07 -- nvmf/common.sh@477 -- # '[' -n 86250 ']' 00:16:06.489 04:12:07 -- nvmf/common.sh@478 -- # killprocess 86250 00:16:06.489 04:12:07 -- common/autotest_common.sh@936 -- # '[' -z 86250 ']' 00:16:06.489 04:12:07 -- common/autotest_common.sh@940 -- # kill -0 86250 00:16:06.489 04:12:07 -- common/autotest_common.sh@941 -- # uname 00:16:06.489 04:12:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.489 04:12:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86250 00:16:06.489 04:12:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:06.489 killing process with pid 86250 00:16:06.489 04:12:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:06.489 04:12:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86250' 00:16:06.489 04:12:07 -- common/autotest_common.sh@955 -- # kill 86250 00:16:06.489 04:12:07 -- common/autotest_common.sh@960 -- # wait 86250 00:16:06.489 04:12:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:06.489 04:12:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:06.489 04:12:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:06.489 04:12:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.489 04:12:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:06.489 04:12:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.489 04:12:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.489 04:12:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.489 04:12:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:06.489 00:16:06.489 real 0m24.898s 00:16:06.489 user 0m39.014s 00:16:06.489 sys 0m7.386s 00:16:06.489 04:12:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:06.489 ************************************ 00:16:06.489 END TEST nvmf_zcopy 00:16:06.489 ************************************ 00:16:06.489 04:12:08 -- common/autotest_common.sh@10 -- # set +x 00:16:06.489 04:12:08 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:06.489 04:12:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:06.489 04:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.489 04:12:08 -- common/autotest_common.sh@10 -- # set +x 00:16:06.489 ************************************ 00:16:06.489 START TEST 
nvmf_nmic 00:16:06.489 ************************************ 00:16:06.489 04:12:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:06.489 * Looking for test storage... 00:16:06.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:06.489 04:12:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:06.489 04:12:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:06.489 04:12:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:06.489 04:12:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:06.489 04:12:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:06.489 04:12:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:06.489 04:12:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:06.489 04:12:08 -- scripts/common.sh@335 -- # IFS=.-: 00:16:06.489 04:12:08 -- scripts/common.sh@335 -- # read -ra ver1 00:16:06.489 04:12:08 -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.489 04:12:08 -- scripts/common.sh@336 -- # read -ra ver2 00:16:06.489 04:12:08 -- scripts/common.sh@337 -- # local 'op=<' 00:16:06.489 04:12:08 -- scripts/common.sh@339 -- # ver1_l=2 00:16:06.489 04:12:08 -- scripts/common.sh@340 -- # ver2_l=1 00:16:06.489 04:12:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:06.489 04:12:08 -- scripts/common.sh@343 -- # case "$op" in 00:16:06.489 04:12:08 -- scripts/common.sh@344 -- # : 1 00:16:06.489 04:12:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:06.489 04:12:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.489 04:12:08 -- scripts/common.sh@364 -- # decimal 1 00:16:06.489 04:12:08 -- scripts/common.sh@352 -- # local d=1 00:16:06.489 04:12:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.489 04:12:08 -- scripts/common.sh@354 -- # echo 1 00:16:06.489 04:12:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:06.489 04:12:08 -- scripts/common.sh@365 -- # decimal 2 00:16:06.489 04:12:08 -- scripts/common.sh@352 -- # local d=2 00:16:06.489 04:12:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.489 04:12:08 -- scripts/common.sh@354 -- # echo 2 00:16:06.489 04:12:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:06.489 04:12:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:06.489 04:12:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:06.489 04:12:08 -- scripts/common.sh@367 -- # return 0 00:16:06.489 04:12:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.489 04:12:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.489 --rc genhtml_branch_coverage=1 00:16:06.489 --rc genhtml_function_coverage=1 00:16:06.489 --rc genhtml_legend=1 00:16:06.489 --rc geninfo_all_blocks=1 00:16:06.489 --rc geninfo_unexecuted_blocks=1 00:16:06.489 00:16:06.489 ' 00:16:06.489 04:12:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.489 --rc genhtml_branch_coverage=1 00:16:06.489 --rc genhtml_function_coverage=1 00:16:06.489 --rc genhtml_legend=1 00:16:06.489 --rc geninfo_all_blocks=1 00:16:06.489 --rc geninfo_unexecuted_blocks=1 00:16:06.489 00:16:06.489 ' 00:16:06.489 04:12:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.489 --rc 
genhtml_branch_coverage=1 00:16:06.489 --rc genhtml_function_coverage=1 00:16:06.489 --rc genhtml_legend=1 00:16:06.489 --rc geninfo_all_blocks=1 00:16:06.489 --rc geninfo_unexecuted_blocks=1 00:16:06.489 00:16:06.489 ' 00:16:06.489 04:12:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.489 --rc genhtml_branch_coverage=1 00:16:06.489 --rc genhtml_function_coverage=1 00:16:06.489 --rc genhtml_legend=1 00:16:06.489 --rc geninfo_all_blocks=1 00:16:06.489 --rc geninfo_unexecuted_blocks=1 00:16:06.489 00:16:06.489 ' 00:16:06.489 04:12:08 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.489 04:12:08 -- nvmf/common.sh@7 -- # uname -s 00:16:06.489 04:12:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.489 04:12:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.489 04:12:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.489 04:12:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.489 04:12:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.489 04:12:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.489 04:12:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.489 04:12:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.489 04:12:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.489 04:12:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.489 04:12:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:06.489 04:12:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:06.489 04:12:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.489 04:12:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.489 04:12:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.489 04:12:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.489 04:12:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.489 04:12:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.489 04:12:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.489 04:12:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.489 04:12:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.489 04:12:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.490 04:12:08 -- paths/export.sh@5 -- # export PATH 00:16:06.490 04:12:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.490 04:12:08 -- nvmf/common.sh@46 -- # : 0 00:16:06.490 04:12:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:06.490 04:12:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:06.490 04:12:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:06.490 04:12:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.490 04:12:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.490 04:12:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:06.490 04:12:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:06.490 04:12:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:06.490 04:12:08 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.490 04:12:08 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.490 04:12:08 -- target/nmic.sh@14 -- # nvmftestinit 00:16:06.490 04:12:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:06.490 04:12:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.490 04:12:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:06.490 04:12:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:06.490 04:12:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:06.490 04:12:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.490 04:12:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.490 04:12:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.490 04:12:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:06.490 04:12:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:06.490 04:12:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:06.490 04:12:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:06.490 04:12:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:06.490 04:12:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:06.490 04:12:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.490 04:12:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.490 04:12:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:06.490 04:12:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:06.490 04:12:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.490 04:12:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.490 04:12:08 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.490 04:12:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.490 04:12:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.490 04:12:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.490 04:12:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.490 04:12:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.490 04:12:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:06.747 04:12:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:06.747 Cannot find device "nvmf_tgt_br" 00:16:06.747 04:12:08 -- nvmf/common.sh@154 -- # true 00:16:06.747 04:12:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.747 Cannot find device "nvmf_tgt_br2" 00:16:06.747 04:12:08 -- nvmf/common.sh@155 -- # true 00:16:06.747 04:12:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:06.747 04:12:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:06.747 Cannot find device "nvmf_tgt_br" 00:16:06.747 04:12:08 -- nvmf/common.sh@157 -- # true 00:16:06.747 04:12:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:06.747 Cannot find device "nvmf_tgt_br2" 00:16:06.747 04:12:08 -- nvmf/common.sh@158 -- # true 00:16:06.747 04:12:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:06.747 04:12:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:06.747 04:12:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.747 04:12:08 -- nvmf/common.sh@161 -- # true 00:16:06.747 04:12:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.747 04:12:08 -- nvmf/common.sh@162 -- # true 00:16:06.747 04:12:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.747 04:12:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.747 04:12:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.747 04:12:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.747 04:12:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:06.747 04:12:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.747 04:12:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.747 04:12:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:06.747 04:12:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:06.747 04:12:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:06.747 04:12:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:06.747 04:12:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:06.747 04:12:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:06.747 04:12:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.005 04:12:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.005 04:12:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:07.005 04:12:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:07.005 04:12:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:07.005 04:12:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.005 04:12:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.005 04:12:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.005 04:12:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.005 04:12:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.005 04:12:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:07.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:16:07.005 00:16:07.005 --- 10.0.0.2 ping statistics --- 00:16:07.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.005 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:07.005 04:12:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:07.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:07.005 00:16:07.006 --- 10.0.0.3 ping statistics --- 00:16:07.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.006 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:07.006 04:12:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:07.006 00:16:07.006 --- 10.0.0.1 ping statistics --- 00:16:07.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.006 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:07.006 04:12:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.006 04:12:08 -- nvmf/common.sh@421 -- # return 0 00:16:07.006 04:12:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:07.006 04:12:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.006 04:12:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:07.006 04:12:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:07.006 04:12:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.006 04:12:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:07.006 04:12:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:07.006 04:12:08 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:07.006 04:12:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:07.006 04:12:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:07.006 04:12:08 -- common/autotest_common.sh@10 -- # set +x 00:16:07.006 04:12:08 -- nvmf/common.sh@469 -- # nvmfpid=86747 00:16:07.006 04:12:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.006 04:12:08 -- nvmf/common.sh@470 -- # waitforlisten 86747 00:16:07.006 04:12:08 -- common/autotest_common.sh@829 -- # '[' -z 86747 ']' 00:16:07.006 04:12:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.006 04:12:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
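The veth wiring that nvmf_veth_init assembles above reduces to two veth pairs joined by a bridge, with the target end of one pair moved into its own network namespace (10.0.0.1 on the initiator side, 10.0.0.2 inside the namespace). A stripped-down sketch of the same topology (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge ties both pairs together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target sanity check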
00:16:07.006 04:12:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.006 04:12:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.006 04:12:08 -- common/autotest_common.sh@10 -- # set +x 00:16:07.006 [2024-11-26 04:12:08.679753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:07.006 [2024-11-26 04:12:08.679838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.264 [2024-11-26 04:12:08.819736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.264 [2024-11-26 04:12:08.901234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.264 [2024-11-26 04:12:08.901355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.264 [2024-11-26 04:12:08.901367] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.264 [2024-11-26 04:12:08.901375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.264 [2024-11-26 04:12:08.901471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.264 [2024-11-26 04:12:08.901548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.264 [2024-11-26 04:12:08.902318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.264 [2024-11-26 04:12:08.902348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.198 04:12:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.198 04:12:09 -- common/autotest_common.sh@862 -- # return 0 00:16:08.198 04:12:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:08.198 04:12:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 04:12:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.198 04:12:09 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 [2024-11-26 04:12:09.753786] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 Malloc0 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 
-- common/autotest_common.sh@10 -- # set +x 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 [2024-11-26 04:12:09.827251] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.198 test case1: single bdev can't be used in multiple subsystems 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:08.198 04:12:09 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.198 04:12:09 -- target/nmic.sh@28 -- # nmic_status=0 00:16:08.198 04:12:09 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:08.198 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.198 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.198 [2024-11-26 04:12:09.851166] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:08.198 [2024-11-26 04:12:09.851212] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:08.198 [2024-11-26 04:12:09.851221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.198 2024/11/26 04:12:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.198 request: 00:16:08.198 { 00:16:08.198 "method": "nvmf_subsystem_add_ns", 00:16:08.198 "params": { 00:16:08.198 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:08.198 "namespace": { 00:16:08.198 "bdev_name": "Malloc0" 00:16:08.198 } 00:16:08.198 } 00:16:08.198 } 00:16:08.198 Got JSON-RPC error response 00:16:08.199 GoRPCClient: error on JSON-RPC call 00:16:08.199 Adding namespace failed - expected result. 00:16:08.199 test case2: host connect to nvmf target in multiple paths 00:16:08.199 04:12:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:08.199 04:12:09 -- target/nmic.sh@29 -- # nmic_status=1 00:16:08.199 04:12:09 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:08.199 04:12:09 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
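The -32602 error just above is the expected outcome of test case1: the first subsystem claims Malloc0 exclusive_write when the namespace is attached, so attaching the same bdev to a second subsystem is rejected ("bdev Malloc0 already claimed"). A sketch of the same sequence driven by hand through rpc.py, assuming the target started earlier in the run:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0 exclusive_write
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: Invalid parameters (-32602)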
00:16:08.199 04:12:09 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:08.199 04:12:09 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:08.199 04:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.199 04:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:08.199 [2024-11-26 04:12:09.863236] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:08.199 04:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.199 04:12:09 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.458 04:12:10 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:08.458 04:12:10 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.458 04:12:10 -- common/autotest_common.sh@1187 -- # local i=0 00:16:08.458 04:12:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.458 04:12:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:08.458 04:12:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:10.992 04:12:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:10.992 04:12:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:10.992 04:12:12 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.992 04:12:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:10.992 04:12:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.992 04:12:12 -- common/autotest_common.sh@1197 -- # return 0 00:16:10.992 04:12:12 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:10.992 [global] 00:16:10.992 thread=1 00:16:10.992 invalidate=1 00:16:10.992 rw=write 00:16:10.992 time_based=1 00:16:10.992 runtime=1 00:16:10.992 ioengine=libaio 00:16:10.992 direct=1 00:16:10.992 bs=4096 00:16:10.992 iodepth=1 00:16:10.992 norandommap=0 00:16:10.992 numjobs=1 00:16:10.992 00:16:10.992 verify_dump=1 00:16:10.992 verify_backlog=512 00:16:10.992 verify_state_save=0 00:16:10.992 do_verify=1 00:16:10.992 verify=crc32c-intel 00:16:10.992 [job0] 00:16:10.992 filename=/dev/nvme0n1 00:16:10.992 Could not set queue depth (nvme0n1) 00:16:10.992 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.992 fio-3.35 00:16:10.992 Starting 1 thread 00:16:11.929 00:16:11.929 job0: (groupid=0, jobs=1): err= 0: pid=86861: Tue Nov 26 04:12:13 2024 00:16:11.929 read: IOPS=3165, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:16:11.929 slat (nsec): min=10498, max=64082, avg=13318.96, stdev=4598.89 00:16:11.929 clat (usec): min=116, max=513, avg=152.97, stdev=19.90 00:16:11.929 lat (usec): min=127, max=526, avg=166.29, stdev=20.73 00:16:11.929 clat percentiles (usec): 00:16:11.929 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:16:11.929 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:16:11.929 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 190], 00:16:11.929 | 99.00th=[ 206], 99.50th=[ 215], 
99.90th=[ 243], 99.95th=[ 289], 00:16:11.929 | 99.99th=[ 515] 00:16:11.929 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:11.929 slat (usec): min=16, max=135, avg=21.38, stdev= 7.33 00:16:11.929 clat (usec): min=82, max=189, avg=107.77, stdev=15.75 00:16:11.929 lat (usec): min=99, max=275, avg=129.15, stdev=18.30 00:16:11.929 clat percentiles (usec): 00:16:11.929 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 96], 00:16:11.929 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 108], 00:16:11.929 | 70.00th=[ 113], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 141], 00:16:11.929 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 182], 00:16:11.929 | 99.99th=[ 190] 00:16:11.929 bw ( KiB/s): min=14080, max=14080, per=98.31%, avg=14080.00, stdev= 0.00, samples=1 00:16:11.929 iops : min= 3520, max= 3520, avg=3520.00, stdev= 0.00, samples=1 00:16:11.929 lat (usec) : 100=19.55%, 250=80.42%, 500=0.01%, 750=0.01% 00:16:11.929 cpu : usr=2.00%, sys=8.80%, ctx=6753, majf=0, minf=5 00:16:11.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.929 issued rwts: total=3169,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.929 00:16:11.929 Run status group 0 (all jobs): 00:16:11.929 READ: bw=12.4MiB/s (13.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=12.4MiB (13.0MB), run=1001-1001msec 00:16:11.929 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:11.929 00:16:11.929 Disk stats (read/write): 00:16:11.929 nvme0n1: ios=3005/3072, merge=0/0, ticks=512/388, in_queue=900, util=91.38% 00:16:11.929 04:12:13 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:11.929 04:12:13 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.929 04:12:13 -- common/autotest_common.sh@1208 -- # local i=0 00:16:11.929 04:12:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:11.929 04:12:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.929 04:12:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:11.929 04:12:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.929 04:12:13 -- common/autotest_common.sh@1220 -- # return 0 00:16:11.929 04:12:13 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:11.929 04:12:13 -- target/nmic.sh@53 -- # nvmftestfini 00:16:11.929 04:12:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:11.929 04:12:13 -- nvmf/common.sh@116 -- # sync 00:16:12.189 04:12:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.189 04:12:13 -- nvmf/common.sh@119 -- # set +e 00:16:12.189 04:12:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.189 04:12:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.189 rmmod nvme_tcp 00:16:12.189 rmmod nvme_fabrics 00:16:12.189 rmmod nvme_keyring 00:16:12.189 04:12:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.189 04:12:13 -- nvmf/common.sh@123 -- # set -e 00:16:12.189 04:12:13 -- nvmf/common.sh@124 -- # return 0 00:16:12.189 04:12:13 -- nvmf/common.sh@477 -- # '[' -n 86747 ']' 00:16:12.189 04:12:13 -- nvmf/common.sh@478 -- 
# killprocess 86747 00:16:12.189 04:12:13 -- common/autotest_common.sh@936 -- # '[' -z 86747 ']' 00:16:12.189 04:12:13 -- common/autotest_common.sh@940 -- # kill -0 86747 00:16:12.189 04:12:13 -- common/autotest_common.sh@941 -- # uname 00:16:12.189 04:12:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.189 04:12:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86747 00:16:12.189 killing process with pid 86747 00:16:12.189 04:12:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:12.189 04:12:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:12.189 04:12:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86747' 00:16:12.189 04:12:13 -- common/autotest_common.sh@955 -- # kill 86747 00:16:12.189 04:12:13 -- common/autotest_common.sh@960 -- # wait 86747 00:16:12.448 04:12:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.448 04:12:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.448 04:12:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.448 04:12:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.448 04:12:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.448 04:12:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.448 04:12:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.448 04:12:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.448 04:12:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:12.448 00:16:12.448 real 0m6.077s 00:16:12.448 user 0m20.442s 00:16:12.448 sys 0m1.335s 00:16:12.448 04:12:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:12.448 04:12:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.448 ************************************ 00:16:12.448 END TEST nvmf_nmic 00:16:12.448 ************************************ 00:16:12.448 04:12:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:12.448 04:12:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.448 04:12:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.448 04:12:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.448 ************************************ 00:16:12.448 START TEST nvmf_fio_target 00:16:12.448 ************************************ 00:16:12.448 04:12:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:12.707 * Looking for test storage... 
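In the nmic run that just finished, test case2 connected to the same subsystem over two listeners (ports 4420 and 4421), which is why a single disconnect by NQN later reports "disconnected 2 controller(s)". A condensed sketch of that flow; the hostnqn is generated per run, and the hostid the test passes is simply the UUID portion of it:

  HOSTNQN=$(nvme gen-hostnqn)
  HOSTID=${HOSTNQN##*:}                        # nqn.2014-08.org.nvmexpress:uuid:<id> -> <id>
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # the test's waitforserial polls this in a loop
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # tears down both controllers at once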
00:16:12.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:12.707 04:12:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:12.707 04:12:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:12.707 04:12:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:12.707 04:12:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:12.707 04:12:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:12.707 04:12:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:12.707 04:12:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:12.707 04:12:14 -- scripts/common.sh@335 -- # IFS=.-: 00:16:12.707 04:12:14 -- scripts/common.sh@335 -- # read -ra ver1 00:16:12.707 04:12:14 -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.707 04:12:14 -- scripts/common.sh@336 -- # read -ra ver2 00:16:12.707 04:12:14 -- scripts/common.sh@337 -- # local 'op=<' 00:16:12.707 04:12:14 -- scripts/common.sh@339 -- # ver1_l=2 00:16:12.707 04:12:14 -- scripts/common.sh@340 -- # ver2_l=1 00:16:12.707 04:12:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:12.707 04:12:14 -- scripts/common.sh@343 -- # case "$op" in 00:16:12.707 04:12:14 -- scripts/common.sh@344 -- # : 1 00:16:12.707 04:12:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:12.707 04:12:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.707 04:12:14 -- scripts/common.sh@364 -- # decimal 1 00:16:12.707 04:12:14 -- scripts/common.sh@352 -- # local d=1 00:16:12.707 04:12:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.707 04:12:14 -- scripts/common.sh@354 -- # echo 1 00:16:12.707 04:12:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:12.707 04:12:14 -- scripts/common.sh@365 -- # decimal 2 00:16:12.707 04:12:14 -- scripts/common.sh@352 -- # local d=2 00:16:12.708 04:12:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.708 04:12:14 -- scripts/common.sh@354 -- # echo 2 00:16:12.708 04:12:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:12.708 04:12:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:12.708 04:12:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:12.708 04:12:14 -- scripts/common.sh@367 -- # return 0 00:16:12.708 04:12:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.708 04:12:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.708 --rc genhtml_branch_coverage=1 00:16:12.708 --rc genhtml_function_coverage=1 00:16:12.708 --rc genhtml_legend=1 00:16:12.708 --rc geninfo_all_blocks=1 00:16:12.708 --rc geninfo_unexecuted_blocks=1 00:16:12.708 00:16:12.708 ' 00:16:12.708 04:12:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.708 --rc genhtml_branch_coverage=1 00:16:12.708 --rc genhtml_function_coverage=1 00:16:12.708 --rc genhtml_legend=1 00:16:12.708 --rc geninfo_all_blocks=1 00:16:12.708 --rc geninfo_unexecuted_blocks=1 00:16:12.708 00:16:12.708 ' 00:16:12.708 04:12:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.708 --rc genhtml_branch_coverage=1 00:16:12.708 --rc genhtml_function_coverage=1 00:16:12.708 --rc genhtml_legend=1 00:16:12.708 --rc geninfo_all_blocks=1 00:16:12.708 --rc geninfo_unexecuted_blocks=1 00:16:12.708 00:16:12.708 ' 00:16:12.708 
04:12:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.708 --rc genhtml_branch_coverage=1 00:16:12.708 --rc genhtml_function_coverage=1 00:16:12.708 --rc genhtml_legend=1 00:16:12.708 --rc geninfo_all_blocks=1 00:16:12.708 --rc geninfo_unexecuted_blocks=1 00:16:12.708 00:16:12.708 ' 00:16:12.708 04:12:14 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.708 04:12:14 -- nvmf/common.sh@7 -- # uname -s 00:16:12.708 04:12:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.708 04:12:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.708 04:12:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.708 04:12:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.708 04:12:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.708 04:12:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.708 04:12:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.708 04:12:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.708 04:12:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.708 04:12:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.708 04:12:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:12.708 04:12:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:12.708 04:12:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.708 04:12:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.708 04:12:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.708 04:12:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.708 04:12:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.708 04:12:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.708 04:12:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.708 04:12:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.708 04:12:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.708 04:12:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.708 04:12:14 -- paths/export.sh@5 -- # export PATH 00:16:12.708 04:12:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.708 04:12:14 -- nvmf/common.sh@46 -- # : 0 00:16:12.708 04:12:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:12.708 04:12:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:12.708 04:12:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:12.708 04:12:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.708 04:12:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.708 04:12:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:12.708 04:12:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:12.708 04:12:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:12.708 04:12:14 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.708 04:12:14 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.708 04:12:14 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.708 04:12:14 -- target/fio.sh@16 -- # nvmftestinit 00:16:12.708 04:12:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:12.708 04:12:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.708 04:12:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:12.708 04:12:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:12.708 04:12:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:12.708 04:12:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.708 04:12:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.708 04:12:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.708 04:12:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:12.708 04:12:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:12.708 04:12:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:12.708 04:12:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:12.708 04:12:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:12.708 04:12:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:12.708 04:12:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.708 04:12:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.708 04:12:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.708 04:12:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:12.708 04:12:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.708 04:12:14 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.708 04:12:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.708 04:12:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.708 04:12:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.708 04:12:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.708 04:12:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.708 04:12:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.708 04:12:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:12.708 04:12:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:12.708 Cannot find device "nvmf_tgt_br" 00:16:12.708 04:12:14 -- nvmf/common.sh@154 -- # true 00:16:12.708 04:12:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.708 Cannot find device "nvmf_tgt_br2" 00:16:12.708 04:12:14 -- nvmf/common.sh@155 -- # true 00:16:12.708 04:12:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:12.708 04:12:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:12.708 Cannot find device "nvmf_tgt_br" 00:16:12.708 04:12:14 -- nvmf/common.sh@157 -- # true 00:16:12.708 04:12:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:12.708 Cannot find device "nvmf_tgt_br2" 00:16:12.708 04:12:14 -- nvmf/common.sh@158 -- # true 00:16:12.708 04:12:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:12.968 04:12:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:12.968 04:12:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.968 04:12:14 -- nvmf/common.sh@161 -- # true 00:16:12.968 04:12:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.968 04:12:14 -- nvmf/common.sh@162 -- # true 00:16:12.968 04:12:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.968 04:12:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.968 04:12:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.968 04:12:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.968 04:12:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.968 04:12:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.968 04:12:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.968 04:12:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.968 04:12:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.968 04:12:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:12.968 04:12:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:12.968 04:12:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:12.968 04:12:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:12.968 04:12:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.968 04:12:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:12.968 04:12:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.968 04:12:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:12.968 04:12:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:12.968 04:12:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.968 04:12:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.968 04:12:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.968 04:12:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.968 04:12:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.968 04:12:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:12.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:16:12.968 00:16:12.968 --- 10.0.0.2 ping statistics --- 00:16:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.968 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:12.968 04:12:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:12.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:12.968 00:16:12.968 --- 10.0.0.3 ping statistics --- 00:16:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.968 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:12.968 04:12:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:16:12.968 00:16:12.968 --- 10.0.0.1 ping statistics --- 00:16:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.968 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:16:12.968 04:12:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.968 04:12:14 -- nvmf/common.sh@421 -- # return 0 00:16:12.968 04:12:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:12.968 04:12:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.968 04:12:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:12.968 04:12:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:12.968 04:12:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.968 04:12:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:12.968 04:12:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:12.968 04:12:14 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:12.968 04:12:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:12.968 04:12:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:12.968 04:12:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:12.968 04:12:14 -- nvmf/common.sh@469 -- # nvmfpid=87046 00:16:12.968 04:12:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:12.968 04:12:14 -- nvmf/common.sh@470 -- # waitforlisten 87046 00:16:12.968 04:12:14 -- common/autotest_common.sh@829 -- # '[' -z 87046 ']' 00:16:12.968 04:12:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.968 04:12:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.968 04:12:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.968 04:12:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.227 04:12:14 -- common/autotest_common.sh@10 -- # set +x 00:16:13.227 [2024-11-26 04:12:14.786041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:13.227 [2024-11-26 04:12:14.786126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.227 [2024-11-26 04:12:14.930873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.486 [2024-11-26 04:12:15.017295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.486 [2024-11-26 04:12:15.017489] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.486 [2024-11-26 04:12:15.017507] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.486 [2024-11-26 04:12:15.017518] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
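For readers following the log: the nvmf_veth_init sequence above builds roughly the topology sketched below before the target starts. This is a condensed reconstruction of the ip/iptables commands visible in the log output, not an excerpt from the test scripts, and it omits the "link set ... up" steps for brevity.

    # initiator side (root namespace): veth pair, 10.0.0.1/24 on the *_if end
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if

    # target side: two veth pairs whose *_if ends are moved into the nvmf_tgt_ns_spdk namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # all *_br ends are enslaved to a single bridge so 10.0.0.1 can reach 10.0.0.2 / 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

nvmf_tgt is then launched inside nvmf_tgt_ns_spdk (the "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt" command above), so the nvme connect and fio runs that follow later in the log exercise a real TCP path across the bridge rather than loopback.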
00:16:13.486 [2024-11-26 04:12:15.017678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.486 [2024-11-26 04:12:15.017763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.486 [2024-11-26 04:12:15.017976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.486 [2024-11-26 04:12:15.017997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.054 04:12:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.054 04:12:15 -- common/autotest_common.sh@862 -- # return 0 00:16:14.054 04:12:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.054 04:12:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.054 04:12:15 -- common/autotest_common.sh@10 -- # set +x 00:16:14.054 04:12:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.054 04:12:15 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:14.312 [2024-11-26 04:12:15.980066] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.313 04:12:16 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.880 04:12:16 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:14.880 04:12:16 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.140 04:12:16 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:15.140 04:12:16 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.399 04:12:16 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:15.399 04:12:16 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.658 04:12:17 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:15.658 04:12:17 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:15.917 04:12:17 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.176 04:12:17 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:16.176 04:12:17 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.435 04:12:17 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:16.435 04:12:17 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.694 04:12:18 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:16.694 04:12:18 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:16.694 04:12:18 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:16.952 04:12:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:16.952 04:12:18 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.211 04:12:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:17.211 04:12:18 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.470 04:12:19 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.728 [2024-11-26 04:12:19.397006] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.728 04:12:19 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:17.986 04:12:19 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:18.246 04:12:19 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.505 04:12:20 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:18.505 04:12:20 -- common/autotest_common.sh@1187 -- # local i=0 00:16:18.505 04:12:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.505 04:12:20 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:18.505 04:12:20 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:18.505 04:12:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:20.410 04:12:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:20.410 04:12:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:20.410 04:12:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.410 04:12:22 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:20.410 04:12:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.410 04:12:22 -- common/autotest_common.sh@1197 -- # return 0 00:16:20.410 04:12:22 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:20.410 [global] 00:16:20.410 thread=1 00:16:20.410 invalidate=1 00:16:20.410 rw=write 00:16:20.410 time_based=1 00:16:20.410 runtime=1 00:16:20.410 ioengine=libaio 00:16:20.410 direct=1 00:16:20.410 bs=4096 00:16:20.410 iodepth=1 00:16:20.410 norandommap=0 00:16:20.410 numjobs=1 00:16:20.410 00:16:20.410 verify_dump=1 00:16:20.410 verify_backlog=512 00:16:20.410 verify_state_save=0 00:16:20.410 do_verify=1 00:16:20.410 verify=crc32c-intel 00:16:20.410 [job0] 00:16:20.410 filename=/dev/nvme0n1 00:16:20.410 [job1] 00:16:20.410 filename=/dev/nvme0n2 00:16:20.410 [job2] 00:16:20.410 filename=/dev/nvme0n3 00:16:20.410 [job3] 00:16:20.410 filename=/dev/nvme0n4 00:16:20.410 Could not set queue depth (nvme0n1) 00:16:20.410 Could not set queue depth (nvme0n2) 00:16:20.410 Could not set queue depth (nvme0n3) 00:16:20.410 Could not set queue depth (nvme0n4) 00:16:20.668 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.668 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.668 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.669 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.669 fio-3.35 00:16:20.669 Starting 4 threads 00:16:22.048 00:16:22.048 job0: (groupid=0, jobs=1): err= 0: pid=87340: Tue Nov 26 04:12:23 2024 00:16:22.048 read: IOPS=2430, BW=9722KiB/s (9956kB/s)(9732KiB/1001msec) 00:16:22.048 slat (nsec): min=10353, max=65155, avg=13412.17, stdev=3095.57 00:16:22.048 clat (usec): min=135, max=675, avg=196.06, stdev=30.45 
00:16:22.048 lat (usec): min=147, max=687, avg=209.47, stdev=30.98 00:16:22.048 clat percentiles (usec): 00:16:22.048 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 178], 00:16:22.048 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:16:22.048 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 235], 00:16:22.048 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 515], 99.95th=[ 545], 00:16:22.048 | 99.99th=[ 676] 00:16:22.048 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:22.048 slat (usec): min=16, max=120, avg=21.34, stdev= 5.68 00:16:22.048 clat (usec): min=124, max=2126, avg=167.35, stdev=43.72 00:16:22.048 lat (usec): min=145, max=2150, avg=188.69, stdev=44.55 00:16:22.048 clat percentiles (usec): 00:16:22.048 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:16:22.048 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:16:22.048 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:16:22.048 | 99.00th=[ 229], 99.50th=[ 245], 99.90th=[ 293], 99.95th=[ 445], 00:16:22.048 | 99.99th=[ 2114] 00:16:22.048 bw ( KiB/s): min=11552, max=11552, per=34.74%, avg=11552.00, stdev= 0.00, samples=1 00:16:22.048 iops : min= 2888, max= 2888, avg=2888.00, stdev= 0.00, samples=1 00:16:22.048 lat (usec) : 250=98.54%, 500=1.38%, 750=0.06% 00:16:22.048 lat (msec) : 4=0.02% 00:16:22.048 cpu : usr=1.70%, sys=6.40%, ctx=4997, majf=0, minf=13 00:16:22.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.048 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.048 job1: (groupid=0, jobs=1): err= 0: pid=87341: Tue Nov 26 04:12:23 2024 00:16:22.048 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:22.048 slat (nsec): min=14675, max=75998, avg=20104.10, stdev=6074.04 00:16:22.048 clat (usec): min=164, max=602, avg=317.77, stdev=51.31 00:16:22.048 lat (usec): min=180, max=618, avg=337.87, stdev=53.34 00:16:22.048 clat percentiles (usec): 00:16:22.048 | 1.00th=[ 210], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 269], 00:16:22.048 | 30.00th=[ 285], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 334], 00:16:22.048 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 404], 00:16:22.048 | 99.00th=[ 465], 99.50th=[ 502], 99.90th=[ 594], 99.95th=[ 603], 00:16:22.048 | 99.99th=[ 603] 00:16:22.048 write: IOPS=1663, BW=6653KiB/s (6813kB/s)(6660KiB/1001msec); 0 zone resets 00:16:22.048 slat (nsec): min=22683, max=93121, avg=35000.73, stdev=8206.63 00:16:22.048 clat (usec): min=102, max=491, avg=249.07, stdev=58.46 00:16:22.048 lat (usec): min=125, max=530, avg=284.07, stdev=61.83 00:16:22.048 clat percentiles (usec): 00:16:22.048 | 1.00th=[ 113], 5.00th=[ 151], 10.00th=[ 186], 20.00th=[ 200], 00:16:22.048 | 30.00th=[ 217], 40.00th=[ 235], 50.00th=[ 251], 60.00th=[ 265], 00:16:22.048 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 355], 00:16:22.048 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 494], 00:16:22.048 | 99.99th=[ 494] 00:16:22.048 bw ( KiB/s): min= 7928, max= 7928, per=23.84%, avg=7928.00, stdev= 0.00, samples=1 00:16:22.048 iops : min= 1982, max= 1982, avg=1982.00, stdev= 0.00, samples=1 00:16:22.048 lat (usec) : 250=29.12%, 500=70.63%, 750=0.25% 00:16:22.048 cpu : 
usr=1.40%, sys=7.10%, ctx=3201, majf=0, minf=8 00:16:22.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.048 issued rwts: total=1536,1665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.048 job2: (groupid=0, jobs=1): err= 0: pid=87342: Tue Nov 26 04:12:23 2024 00:16:22.048 read: IOPS=1497, BW=5990KiB/s (6134kB/s)(5996KiB/1001msec) 00:16:22.048 slat (nsec): min=16394, max=63794, avg=23276.15, stdev=6868.18 00:16:22.048 clat (usec): min=195, max=704, avg=333.03, stdev=62.45 00:16:22.048 lat (usec): min=245, max=729, avg=356.31, stdev=64.43 00:16:22.048 clat percentiles (usec): 00:16:22.048 | 1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 281], 00:16:22.048 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 338], 00:16:22.048 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 416], 95.00th=[ 457], 00:16:22.048 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 635], 99.95th=[ 701], 00:16:22.048 | 99.99th=[ 701] 00:16:22.048 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:22.048 slat (nsec): min=23760, max=94229, avg=37747.87, stdev=8548.66 00:16:22.048 clat (usec): min=110, max=3940, avg=260.40, stdev=111.86 00:16:22.048 lat (usec): min=178, max=3992, avg=298.14, stdev=113.54 00:16:22.048 clat percentiles (usec): 00:16:22.048 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 210], 00:16:22.048 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 265], 00:16:22.049 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 355], 00:16:22.049 | 99.00th=[ 412], 99.50th=[ 429], 99.90th=[ 1500], 99.95th=[ 3949], 00:16:22.049 | 99.99th=[ 3949] 00:16:22.049 bw ( KiB/s): min= 7640, max= 7640, per=22.98%, avg=7640.00, stdev= 0.00, samples=1 00:16:22.049 iops : min= 1910, max= 1910, avg=1910.00, stdev= 0.00, samples=1 00:16:22.049 lat (usec) : 250=23.49%, 500=75.19%, 750=1.22%, 1000=0.03% 00:16:22.049 lat (msec) : 2=0.03%, 4=0.03% 00:16:22.049 cpu : usr=1.80%, sys=6.90%, ctx=3035, majf=0, minf=15 00:16:22.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.049 issued rwts: total=1499,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.049 job3: (groupid=0, jobs=1): err= 0: pid=87343: Tue Nov 26 04:12:23 2024 00:16:22.049 read: IOPS=2457, BW=9830KiB/s (10.1MB/s)(9840KiB/1001msec) 00:16:22.049 slat (nsec): min=12104, max=63368, avg=14754.89, stdev=3477.44 00:16:22.049 clat (usec): min=147, max=554, avg=192.79, stdev=24.22 00:16:22.049 lat (usec): min=165, max=567, avg=207.54, stdev=24.74 00:16:22.049 clat percentiles (usec): 00:16:22.049 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:16:22.049 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:16:22.049 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 235], 00:16:22.049 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 367], 99.95th=[ 441], 00:16:22.049 | 99.99th=[ 553] 00:16:22.049 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:22.049 slat (nsec): min=18717, max=79401, avg=24116.60, 
stdev=6004.74 00:16:22.049 clat (usec): min=109, max=313, avg=164.31, stdev=27.00 00:16:22.049 lat (usec): min=129, max=345, avg=188.42, stdev=28.75 00:16:22.049 clat percentiles (usec): 00:16:22.049 | 1.00th=[ 121], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:16:22.049 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 167], 00:16:22.049 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 215], 00:16:22.049 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 293], 00:16:22.049 | 99.99th=[ 314] 00:16:22.049 bw ( KiB/s): min=11512, max=11512, per=34.62%, avg=11512.00, stdev= 0.00, samples=1 00:16:22.049 iops : min= 2878, max= 2878, avg=2878.00, stdev= 0.00, samples=1 00:16:22.049 lat (usec) : 250=98.47%, 500=1.51%, 750=0.02% 00:16:22.049 cpu : usr=1.20%, sys=7.60%, ctx=5020, majf=0, minf=3 00:16:22.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.049 issued rwts: total=2460,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.049 00:16:22.049 Run status group 0 (all jobs): 00:16:22.049 READ: bw=30.9MiB/s (32.4MB/s), 5990KiB/s-9830KiB/s (6134kB/s-10.1MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:16:22.049 WRITE: bw=32.5MiB/s (34.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.5MiB (34.1MB), run=1001-1001msec 00:16:22.049 00:16:22.049 Disk stats (read/write): 00:16:22.049 nvme0n1: ios=2098/2153, merge=0/0, ticks=458/388, in_queue=846, util=87.58% 00:16:22.049 nvme0n2: ios=1174/1536, merge=0/0, ticks=407/407, in_queue=814, util=88.31% 00:16:22.049 nvme0n3: ios=1031/1536, merge=0/0, ticks=375/421, in_queue=796, util=88.77% 00:16:22.049 nvme0n4: ios=2048/2177, merge=0/0, ticks=404/393, in_queue=797, util=89.54% 00:16:22.049 04:12:23 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:22.049 [global] 00:16:22.049 thread=1 00:16:22.049 invalidate=1 00:16:22.049 rw=randwrite 00:16:22.049 time_based=1 00:16:22.049 runtime=1 00:16:22.049 ioengine=libaio 00:16:22.049 direct=1 00:16:22.049 bs=4096 00:16:22.049 iodepth=1 00:16:22.049 norandommap=0 00:16:22.049 numjobs=1 00:16:22.049 00:16:22.049 verify_dump=1 00:16:22.049 verify_backlog=512 00:16:22.049 verify_state_save=0 00:16:22.049 do_verify=1 00:16:22.049 verify=crc32c-intel 00:16:22.049 [job0] 00:16:22.049 filename=/dev/nvme0n1 00:16:22.049 [job1] 00:16:22.049 filename=/dev/nvme0n2 00:16:22.049 [job2] 00:16:22.049 filename=/dev/nvme0n3 00:16:22.049 [job3] 00:16:22.049 filename=/dev/nvme0n4 00:16:22.049 Could not set queue depth (nvme0n1) 00:16:22.049 Could not set queue depth (nvme0n2) 00:16:22.049 Could not set queue depth (nvme0n3) 00:16:22.049 Could not set queue depth (nvme0n4) 00:16:22.049 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.049 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.049 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.049 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.049 fio-3.35 00:16:22.049 Starting 4 threads 00:16:23.430 00:16:23.430 job0: (groupid=0, jobs=1): err= 
0: pid=87396: Tue Nov 26 04:12:24 2024 00:16:23.430 read: IOPS=1796, BW=7185KiB/s (7357kB/s)(7192KiB/1001msec) 00:16:23.430 slat (nsec): min=11237, max=58709, avg=15761.76, stdev=3944.43 00:16:23.430 clat (usec): min=134, max=2178, avg=268.50, stdev=50.04 00:16:23.430 lat (usec): min=146, max=2193, avg=284.26, stdev=50.21 00:16:23.430 clat percentiles (usec): 00:16:23.430 | 1.00th=[ 225], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:16:23.430 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:16:23.430 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 302], 00:16:23.430 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 429], 99.95th=[ 2180], 00:16:23.430 | 99.99th=[ 2180] 00:16:23.430 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:23.430 slat (nsec): min=17297, max=80776, avg=24198.26, stdev=6192.57 00:16:23.430 clat (usec): min=95, max=3382, avg=211.11, stdev=118.77 00:16:23.430 lat (usec): min=119, max=3420, avg=235.31, stdev=119.77 00:16:23.430 clat percentiles (usec): 00:16:23.430 | 1.00th=[ 117], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 190], 00:16:23.430 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:16:23.430 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 247], 00:16:23.430 | 99.00th=[ 343], 99.50th=[ 494], 99.90th=[ 1926], 99.95th=[ 3261], 00:16:23.430 | 99.99th=[ 3392] 00:16:23.430 bw ( KiB/s): min= 8192, max= 8192, per=20.23%, avg=8192.00, stdev= 0.00, samples=1 00:16:23.430 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:23.430 lat (usec) : 100=0.05%, 250=59.57%, 500=40.09%, 750=0.03%, 1000=0.10% 00:16:23.430 lat (msec) : 2=0.08%, 4=0.08% 00:16:23.430 cpu : usr=1.80%, sys=5.50%, ctx=3846, majf=0, minf=19 00:16:23.430 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.430 issued rwts: total=1798,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.430 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.430 job1: (groupid=0, jobs=1): err= 0: pid=87397: Tue Nov 26 04:12:24 2024 00:16:23.430 read: IOPS=1814, BW=7257KiB/s (7431kB/s)(7264KiB/1001msec) 00:16:23.430 slat (nsec): min=12477, max=48126, avg=16142.73, stdev=4519.29 00:16:23.430 clat (usec): min=161, max=716, avg=272.40, stdev=31.51 00:16:23.430 lat (usec): min=178, max=736, avg=288.54, stdev=32.23 00:16:23.430 clat percentiles (usec): 00:16:23.430 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 253], 00:16:23.430 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:16:23.430 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 326], 00:16:23.430 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 562], 99.95th=[ 717], 00:16:23.430 | 99.99th=[ 717] 00:16:23.430 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:23.430 slat (usec): min=18, max=109, avg=26.32, stdev= 6.86 00:16:23.430 clat (usec): min=100, max=521, avg=202.77, stdev=29.59 00:16:23.430 lat (usec): min=131, max=547, avg=229.09, stdev=29.58 00:16:23.430 clat percentiles (usec): 00:16:23.430 | 1.00th=[ 117], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 186], 00:16:23.430 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:16:23.430 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 247], 00:16:23.430 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 424], 99.95th=[ 
429], 00:16:23.430 | 99.99th=[ 523] 00:16:23.430 bw ( KiB/s): min= 8192, max= 8192, per=20.23%, avg=8192.00, stdev= 0.00, samples=1 00:16:23.430 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:23.430 lat (usec) : 250=56.91%, 500=42.96%, 750=0.13% 00:16:23.430 cpu : usr=1.50%, sys=5.90%, ctx=3864, majf=0, minf=7 00:16:23.430 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.430 issued rwts: total=1816,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.430 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.430 job2: (groupid=0, jobs=1): err= 0: pid=87398: Tue Nov 26 04:12:24 2024 00:16:23.430 read: IOPS=2692, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:16:23.430 slat (nsec): min=11539, max=75015, avg=13919.69, stdev=4224.78 00:16:23.430 clat (usec): min=141, max=387, avg=174.91, stdev=16.57 00:16:23.430 lat (usec): min=153, max=402, avg=188.83, stdev=17.13 00:16:23.430 clat percentiles (usec): 00:16:23.430 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:16:23.430 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:16:23.430 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:16:23.430 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 285], 99.95th=[ 318], 00:16:23.430 | 99.99th=[ 388] 00:16:23.430 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:23.430 slat (nsec): min=17602, max=92790, avg=21267.66, stdev=5802.78 00:16:23.430 clat (usec): min=102, max=1446, avg=136.02, stdev=33.27 00:16:23.430 lat (usec): min=123, max=1468, avg=157.28, stdev=33.79 00:16:23.430 clat percentiles (usec): 00:16:23.430 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 124], 00:16:23.430 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:16:23.430 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:16:23.430 | 99.00th=[ 182], 99.50th=[ 198], 99.90th=[ 412], 99.95th=[ 971], 00:16:23.430 | 99.99th=[ 1450] 00:16:23.430 bw ( KiB/s): min=12288, max=12288, per=30.34%, avg=12288.00, stdev= 0.00, samples=1 00:16:23.430 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:23.430 lat (usec) : 250=99.76%, 500=0.19%, 750=0.02%, 1000=0.02% 00:16:23.430 lat (msec) : 2=0.02% 00:16:23.430 cpu : usr=1.90%, sys=7.10%, ctx=5768, majf=0, minf=13 00:16:23.430 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.430 issued rwts: total=2695,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.430 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.430 job3: (groupid=0, jobs=1): err= 0: pid=87399: Tue Nov 26 04:12:24 2024 00:16:23.430 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:23.430 slat (nsec): min=11573, max=52584, avg=13917.76, stdev=3895.21 00:16:23.431 clat (usec): min=141, max=328, avg=185.13, stdev=16.92 00:16:23.431 lat (usec): min=162, max=347, avg=199.05, stdev=17.26 00:16:23.431 clat percentiles (usec): 00:16:23.431 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:16:23.431 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:16:23.431 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 208], 
95.00th=[ 217], 00:16:23.431 | 99.00th=[ 231], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 326], 00:16:23.431 | 99.99th=[ 330] 00:16:23.431 write: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:16:23.431 slat (usec): min=17, max=101, avg=21.94, stdev= 5.66 00:16:23.431 clat (usec): min=106, max=2442, avg=140.93, stdev=46.91 00:16:23.431 lat (usec): min=125, max=2463, avg=162.87, stdev=47.26 00:16:23.431 clat percentiles (usec): 00:16:23.431 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:16:23.431 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:16:23.431 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 169], 00:16:23.431 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 469], 99.95th=[ 685], 00:16:23.431 | 99.99th=[ 2442] 00:16:23.431 bw ( KiB/s): min=12288, max=12288, per=30.34%, avg=12288.00, stdev= 0.00, samples=1 00:16:23.431 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:23.431 lat (usec) : 250=99.86%, 500=0.11%, 750=0.02% 00:16:23.431 lat (msec) : 4=0.02% 00:16:23.431 cpu : usr=1.20%, sys=7.60%, ctx=5527, majf=0, minf=5 00:16:23.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.431 issued rwts: total=2560,2967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.431 00:16:23.431 Run status group 0 (all jobs): 00:16:23.431 READ: bw=34.6MiB/s (36.3MB/s), 7185KiB/s-10.5MiB/s (7357kB/s-11.0MB/s), io=34.6MiB (36.3MB), run=1001-1001msec 00:16:23.431 WRITE: bw=39.5MiB/s (41.5MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.6MiB (41.5MB), run=1001-1001msec 00:16:23.431 00:16:23.431 Disk stats (read/write): 00:16:23.431 nvme0n1: ios=1586/1785, merge=0/0, ticks=461/387, in_queue=848, util=87.58% 00:16:23.431 nvme0n2: ios=1580/1809, merge=0/0, ticks=452/389, in_queue=841, util=88.66% 00:16:23.431 nvme0n3: ios=2399/2560, merge=0/0, ticks=430/373, in_queue=803, util=89.16% 00:16:23.431 nvme0n4: ios=2202/2560, merge=0/0, ticks=425/404, in_queue=829, util=89.72% 00:16:23.431 04:12:24 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:23.431 [global] 00:16:23.431 thread=1 00:16:23.431 invalidate=1 00:16:23.431 rw=write 00:16:23.431 time_based=1 00:16:23.431 runtime=1 00:16:23.431 ioengine=libaio 00:16:23.431 direct=1 00:16:23.431 bs=4096 00:16:23.431 iodepth=128 00:16:23.431 norandommap=0 00:16:23.431 numjobs=1 00:16:23.431 00:16:23.431 verify_dump=1 00:16:23.431 verify_backlog=512 00:16:23.431 verify_state_save=0 00:16:23.431 do_verify=1 00:16:23.431 verify=crc32c-intel 00:16:23.431 [job0] 00:16:23.431 filename=/dev/nvme0n1 00:16:23.431 [job1] 00:16:23.431 filename=/dev/nvme0n2 00:16:23.431 [job2] 00:16:23.431 filename=/dev/nvme0n3 00:16:23.431 [job3] 00:16:23.431 filename=/dev/nvme0n4 00:16:23.431 Could not set queue depth (nvme0n1) 00:16:23.431 Could not set queue depth (nvme0n2) 00:16:23.431 Could not set queue depth (nvme0n3) 00:16:23.431 Could not set queue depth (nvme0n4) 00:16:23.431 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.431 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.431 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.431 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.431 fio-3.35 00:16:23.431 Starting 4 threads 00:16:24.867 00:16:24.867 job0: (groupid=0, jobs=1): err= 0: pid=87458: Tue Nov 26 04:12:26 2024 00:16:24.867 read: IOPS=2494, BW=9978KiB/s (10.2MB/s)(9.77MiB/1003msec) 00:16:24.867 slat (usec): min=6, max=11453, avg=187.99, stdev=989.11 00:16:24.867 clat (usec): min=893, max=49143, avg=23704.76, stdev=7511.56 00:16:24.867 lat (usec): min=5088, max=49178, avg=23892.74, stdev=7596.13 00:16:24.867 clat percentiles (usec): 00:16:24.867 | 1.00th=[ 5604], 5.00th=[15008], 10.00th=[15139], 20.00th=[15926], 00:16:24.867 | 30.00th=[18220], 40.00th=[22938], 50.00th=[24249], 60.00th=[24773], 00:16:24.867 | 70.00th=[26346], 80.00th=[28967], 90.00th=[34341], 95.00th=[38536], 00:16:24.867 | 99.00th=[41157], 99.50th=[42730], 99.90th=[46924], 99.95th=[47449], 00:16:24.867 | 99.99th=[49021] 00:16:24.867 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:16:24.867 slat (usec): min=9, max=8025, avg=198.30, stdev=855.18 00:16:24.867 clat (usec): min=15441, max=50931, avg=26125.72, stdev=8642.48 00:16:24.867 lat (usec): min=15490, max=50958, avg=26324.02, stdev=8712.67 00:16:24.867 clat percentiles (usec): 00:16:24.867 | 1.00th=[17957], 5.00th=[18220], 10.00th=[18744], 20.00th=[19530], 00:16:24.867 | 30.00th=[20579], 40.00th=[21890], 50.00th=[23200], 60.00th=[23987], 00:16:24.867 | 70.00th=[26084], 80.00th=[30278], 90.00th=[41681], 95.00th=[47973], 00:16:24.867 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:16:24.867 | 99.99th=[51119] 00:16:24.867 bw ( KiB/s): min= 8872, max=11631, per=14.69%, avg=10251.50, stdev=1950.91, samples=2 00:16:24.867 iops : min= 2218, max= 2907, avg=2562.50, stdev=487.20, samples=2 00:16:24.867 lat (usec) : 1000=0.02% 00:16:24.867 lat (msec) : 10=0.83%, 20=28.84%, 50=69.48%, 100=0.83% 00:16:24.867 cpu : usr=2.89%, sys=7.68%, ctx=261, majf=0, minf=9 00:16:24.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:24.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.867 issued rwts: total=2502,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.867 job1: (groupid=0, jobs=1): err= 0: pid=87459: Tue Nov 26 04:12:26 2024 00:16:24.867 read: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1003msec) 00:16:24.867 slat (usec): min=4, max=7600, avg=122.16, stdev=614.17 00:16:24.867 clat (usec): min=1980, max=27494, avg=15550.20, stdev=2915.35 00:16:24.867 lat (usec): min=7002, max=27579, avg=15672.35, stdev=2953.92 00:16:24.867 clat percentiles (usec): 00:16:24.867 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[12387], 20.00th=[12911], 00:16:24.867 | 30.00th=[13698], 40.00th=[14746], 50.00th=[15533], 60.00th=[16057], 00:16:24.867 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19268], 95.00th=[21103], 00:16:24.867 | 99.00th=[23462], 99.50th=[23725], 99.90th=[26608], 99.95th=[26608], 00:16:24.867 | 99.99th=[27395] 00:16:24.867 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:16:24.867 slat (usec): min=14, max=7357, avg=122.78, stdev=596.83 00:16:24.867 clat (usec): min=8329, max=30097, avg=16628.52, stdev=4641.33 00:16:24.867 lat (usec): min=8361, max=30141, avg=16751.30, 
stdev=4686.35 00:16:24.867 clat percentiles (usec): 00:16:24.867 | 1.00th=[10552], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:16:24.867 | 30.00th=[13698], 40.00th=[14484], 50.00th=[15270], 60.00th=[16188], 00:16:24.867 | 70.00th=[17695], 80.00th=[20317], 90.00th=[25297], 95.00th=[26608], 00:16:24.867 | 99.00th=[28705], 99.50th=[28705], 99.90th=[30016], 99.95th=[30016], 00:16:24.867 | 99.99th=[30016] 00:16:24.867 bw ( KiB/s): min=16384, max=16416, per=23.50%, avg=16400.00, stdev=22.63, samples=2 00:16:24.867 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:16:24.867 lat (msec) : 2=0.01%, 10=0.69%, 20=84.78%, 50=14.52% 00:16:24.867 cpu : usr=4.39%, sys=14.07%, ctx=351, majf=0, minf=7 00:16:24.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:24.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.867 issued rwts: total=3774,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.868 job2: (groupid=0, jobs=1): err= 0: pid=87461: Tue Nov 26 04:12:26 2024 00:16:24.868 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:24.868 slat (usec): min=7, max=2853, avg=92.34, stdev=403.39 00:16:24.868 clat (usec): min=9000, max=14321, avg=12085.27, stdev=917.42 00:16:24.868 lat (usec): min=9520, max=14371, avg=12177.61, stdev=854.22 00:16:24.868 clat percentiles (usec): 00:16:24.868 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[11600], 00:16:24.868 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:16:24.868 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:16:24.868 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14353], 99.95th=[14353], 00:16:24.868 | 99.99th=[14353] 00:16:24.868 write: IOPS=5232, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec); 0 zone resets 00:16:24.868 slat (usec): min=7, max=3919, avg=93.20, stdev=303.14 00:16:24.868 clat (usec): min=1939, max=15314, avg=12325.40, stdev=1338.89 00:16:24.868 lat (usec): min=2599, max=15337, avg=12418.60, stdev=1325.17 00:16:24.868 clat percentiles (usec): 00:16:24.868 | 1.00th=[ 7242], 5.00th=[10290], 10.00th=[10683], 20.00th=[11731], 00:16:24.868 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:16:24.868 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[14222], 00:16:24.868 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15270], 99.95th=[15270], 00:16:24.868 | 99.99th=[15270] 00:16:24.868 bw ( KiB/s): min=20521, max=20552, per=29.42%, avg=20536.50, stdev=21.92, samples=2 00:16:24.868 iops : min= 5130, max= 5138, avg=5134.00, stdev= 5.66, samples=2 00:16:24.868 lat (msec) : 2=0.01%, 4=0.15%, 10=3.08%, 20=96.76% 00:16:24.868 cpu : usr=3.49%, sys=15.17%, ctx=918, majf=0, minf=9 00:16:24.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:24.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.868 issued rwts: total=5120,5248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.868 job3: (groupid=0, jobs=1): err= 0: pid=87462: Tue Nov 26 04:12:26 2024 00:16:24.868 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:24.868 slat (usec): min=7, max=3861, avg=89.17, stdev=439.98 00:16:24.868 clat 
(usec): min=8502, max=15386, avg=11814.17, stdev=1132.48 00:16:24.868 lat (usec): min=8513, max=15420, avg=11903.34, stdev=1102.39 00:16:24.868 clat percentiles (usec): 00:16:24.868 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11338], 00:16:24.868 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:16:24.868 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:16:24.868 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15270], 00:16:24.868 | 99.99th=[15401] 00:16:24.868 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1003msec); 0 zone resets 00:16:24.868 slat (usec): min=10, max=4292, avg=90.74, stdev=418.53 00:16:24.868 clat (usec): min=338, max=15211, avg=11837.94, stdev=1617.49 00:16:24.868 lat (usec): min=3816, max=15228, avg=11928.68, stdev=1587.66 00:16:24.868 clat percentiles (usec): 00:16:24.868 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10290], 00:16:24.868 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:16:24.868 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13566], 95.00th=[14091], 00:16:24.868 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15008], 99.95th=[15008], 00:16:24.868 | 99.99th=[15270] 00:16:24.868 bw ( KiB/s): min=21456, max=22304, per=31.35%, avg=21880.00, stdev=599.63, samples=2 00:16:24.868 iops : min= 5364, max= 5576, avg=5470.00, stdev=149.91, samples=2 00:16:24.868 lat (usec) : 500=0.01% 00:16:24.868 lat (msec) : 4=0.07%, 10=14.55%, 20=85.37% 00:16:24.868 cpu : usr=4.69%, sys=13.47%, ctx=657, majf=0, minf=14 00:16:24.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:24.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.868 issued rwts: total=5120,5598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.868 00:16:24.868 Run status group 0 (all jobs): 00:16:24.868 READ: bw=64.3MiB/s (67.4MB/s), 9978KiB/s-19.9MiB/s (10.2MB/s-20.9MB/s), io=64.5MiB (67.6MB), run=1003-1003msec 00:16:24.868 WRITE: bw=68.2MiB/s (71.5MB/s), 9.97MiB/s-21.8MiB/s (10.5MB/s-22.9MB/s), io=68.4MiB (71.7MB), run=1003-1003msec 00:16:24.868 00:16:24.868 Disk stats (read/write): 00:16:24.868 nvme0n1: ios=2098/2175, merge=0/0, ticks=16700/17013, in_queue=33713, util=87.88% 00:16:24.868 nvme0n2: ios=3370/3584, merge=0/0, ticks=24686/24092, in_queue=48778, util=88.56% 00:16:24.868 nvme0n3: ios=4275/4608, merge=0/0, ticks=12566/12823, in_queue=25389, util=89.03% 00:16:24.868 nvme0n4: ios=4580/4608, merge=0/0, ticks=16595/15750, in_queue=32345, util=89.78% 00:16:24.868 04:12:26 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:24.868 [global] 00:16:24.868 thread=1 00:16:24.868 invalidate=1 00:16:24.868 rw=randwrite 00:16:24.868 time_based=1 00:16:24.868 runtime=1 00:16:24.868 ioengine=libaio 00:16:24.868 direct=1 00:16:24.868 bs=4096 00:16:24.868 iodepth=128 00:16:24.868 norandommap=0 00:16:24.868 numjobs=1 00:16:24.868 00:16:24.868 verify_dump=1 00:16:24.868 verify_backlog=512 00:16:24.868 verify_state_save=0 00:16:24.868 do_verify=1 00:16:24.868 verify=crc32c-intel 00:16:24.868 [job0] 00:16:24.868 filename=/dev/nvme0n1 00:16:24.868 [job1] 00:16:24.868 filename=/dev/nvme0n2 00:16:24.868 [job2] 00:16:24.868 filename=/dev/nvme0n3 00:16:24.868 [job3] 00:16:24.868 filename=/dev/nvme0n4 00:16:24.868 Could not set 
queue depth (nvme0n1) 00:16:24.868 Could not set queue depth (nvme0n2) 00:16:24.868 Could not set queue depth (nvme0n3) 00:16:24.868 Could not set queue depth (nvme0n4) 00:16:24.868 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.868 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.868 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.868 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.868 fio-3.35 00:16:24.868 Starting 4 threads 00:16:26.248 00:16:26.248 job0: (groupid=0, jobs=1): err= 0: pid=87521: Tue Nov 26 04:12:27 2024 00:16:26.248 read: IOPS=1556, BW=6226KiB/s (6376kB/s)(6276KiB/1008msec) 00:16:26.248 slat (usec): min=3, max=10993, avg=237.84, stdev=1048.56 00:16:26.248 clat (usec): min=1687, max=57052, avg=27943.01, stdev=7367.95 00:16:26.248 lat (usec): min=8899, max=57065, avg=28180.85, stdev=7335.00 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[16909], 5.00th=[19792], 10.00th=[21365], 20.00th=[22152], 00:16:26.248 | 30.00th=[22676], 40.00th=[22938], 50.00th=[25560], 60.00th=[29754], 00:16:26.248 | 70.00th=[31851], 80.00th=[34866], 90.00th=[37487], 95.00th=[41681], 00:16:26.248 | 99.00th=[46924], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:16:26.248 | 99.99th=[56886] 00:16:26.248 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:16:26.248 slat (usec): min=12, max=9258, avg=301.49, stdev=1074.21 00:16:26.248 clat (usec): min=14290, max=83272, avg=40543.13, stdev=20626.46 00:16:26.248 lat (usec): min=17231, max=83298, avg=40844.61, stdev=20759.94 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[17171], 5.00th=[19530], 10.00th=[19792], 20.00th=[21365], 00:16:26.248 | 30.00th=[25560], 40.00th=[30016], 50.00th=[32900], 60.00th=[36439], 00:16:26.248 | 70.00th=[45351], 80.00th=[67634], 90.00th=[77071], 95.00th=[79168], 00:16:26.248 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[83362], 00:16:26.248 | 99.99th=[83362] 00:16:26.248 bw ( KiB/s): min= 7240, max= 8400, per=12.98%, avg=7820.00, stdev=820.24, samples=2 00:16:26.248 iops : min= 1810, max= 2100, avg=1955.00, stdev=205.06, samples=2 00:16:26.248 lat (msec) : 2=0.03%, 10=0.30%, 20=8.16%, 50=76.53%, 100=14.98% 00:16:26.248 cpu : usr=2.09%, sys=5.36%, ctx=502, majf=0, minf=11 00:16:26.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.248 issued rwts: total=1569,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.248 job1: (groupid=0, jobs=1): err= 0: pid=87522: Tue Nov 26 04:12:27 2024 00:16:26.248 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:16:26.248 slat (usec): min=10, max=11686, avg=103.93, stdev=571.06 00:16:26.248 clat (usec): min=7302, max=35654, avg=13279.76, stdev=4280.03 00:16:26.248 lat (usec): min=7325, max=35699, avg=13383.69, stdev=4335.32 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:26.248 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[12387], 00:16:26.248 | 70.00th=[15533], 
80.00th=[16581], 90.00th=[17957], 95.00th=[22676], 00:16:26.248 | 99.00th=[25560], 99.50th=[28705], 99.90th=[32900], 99.95th=[32900], 00:16:26.248 | 99.99th=[35914] 00:16:26.248 write: IOPS=4400, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1007msec); 0 zone resets 00:16:26.248 slat (usec): min=13, max=6558, avg=121.25, stdev=543.29 00:16:26.248 clat (usec): min=4324, max=43767, avg=16446.70, stdev=8353.13 00:16:26.248 lat (usec): min=5564, max=43795, avg=16567.95, stdev=8418.82 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:26.248 | 30.00th=[10159], 40.00th=[10814], 50.00th=[14877], 60.00th=[15926], 00:16:26.248 | 70.00th=[17433], 80.00th=[20055], 90.00th=[29754], 95.00th=[34866], 00:16:26.248 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:16:26.248 | 99.99th=[43779] 00:16:26.248 bw ( KiB/s): min=14292, max=20160, per=28.59%, avg=17226.00, stdev=4149.30, samples=2 00:16:26.248 iops : min= 3573, max= 5040, avg=4306.50, stdev=1037.33, samples=2 00:16:26.248 lat (msec) : 10=22.49%, 20=63.32%, 50=14.19% 00:16:26.248 cpu : usr=3.68%, sys=15.61%, ctx=442, majf=0, minf=9 00:16:26.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.248 issued rwts: total=4096,4431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.248 job2: (groupid=0, jobs=1): err= 0: pid=87523: Tue Nov 26 04:12:27 2024 00:16:26.248 read: IOPS=6270, BW=24.5MiB/s (25.7MB/s)(24.5MiB/1001msec) 00:16:26.248 slat (usec): min=7, max=2254, avg=73.63, stdev=309.55 00:16:26.248 clat (usec): min=301, max=12317, avg=9714.90, stdev=999.35 00:16:26.248 lat (usec): min=858, max=13793, avg=9788.53, stdev=962.61 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[ 5473], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 9110], 00:16:26.248 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:16:26.248 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:16:26.248 | 99.00th=[11469], 99.50th=[11600], 99.90th=[11994], 99.95th=[11994], 00:16:26.248 | 99.99th=[12256] 00:16:26.248 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:16:26.248 slat (usec): min=11, max=2345, avg=74.21, stdev=270.59 00:16:26.248 clat (usec): min=7773, max=11898, avg=9860.12, stdev=840.35 00:16:26.248 lat (usec): min=7793, max=11914, avg=9934.33, stdev=824.93 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[ 8029], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:16:26.248 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:16:26.248 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:16:26.248 | 99.00th=[11600], 99.50th=[11731], 99.90th=[11863], 99.95th=[11863], 00:16:26.248 | 99.99th=[11863] 00:16:26.248 bw ( KiB/s): min=26008, max=26008, per=43.17%, avg=26008.00, stdev= 0.00, samples=1 00:16:26.248 iops : min= 6502, max= 6502, avg=6502.00, stdev= 0.00, samples=1 00:16:26.248 lat (usec) : 500=0.01%, 1000=0.03% 00:16:26.248 lat (msec) : 4=0.25%, 10=53.75%, 20=45.96% 00:16:26.248 cpu : usr=5.70%, sys=15.10%, ctx=1035, majf=0, minf=11 00:16:26.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:26.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.248 issued rwts: total=6277,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.248 job3: (groupid=0, jobs=1): err= 0: pid=87524: Tue Nov 26 04:12:27 2024 00:16:26.248 read: IOPS=1550, BW=6201KiB/s (6350kB/s)(6232KiB/1005msec) 00:16:26.248 slat (usec): min=3, max=7447, avg=221.87, stdev=938.86 00:16:26.248 clat (usec): min=1846, max=66390, avg=27968.41, stdev=8266.97 00:16:26.248 lat (usec): min=4725, max=67047, avg=28190.28, stdev=8259.97 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[14091], 5.00th=[19792], 10.00th=[21627], 20.00th=[22676], 00:16:26.248 | 30.00th=[23200], 40.00th=[23462], 50.00th=[24773], 60.00th=[27395], 00:16:26.248 | 70.00th=[30540], 80.00th=[33817], 90.00th=[37487], 95.00th=[41157], 00:16:26.248 | 99.00th=[62129], 99.50th=[62653], 99.90th=[66323], 99.95th=[66323], 00:16:26.248 | 99.99th=[66323] 00:16:26.248 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:16:26.248 slat (usec): min=7, max=8252, avg=313.21, stdev=1033.69 00:16:26.248 clat (usec): min=14789, max=81584, avg=40194.88, stdev=20440.44 00:16:26.248 lat (usec): min=14840, max=81610, avg=40508.09, stdev=20569.77 00:16:26.248 clat percentiles (usec): 00:16:26.248 | 1.00th=[16909], 5.00th=[19530], 10.00th=[20055], 20.00th=[21103], 00:16:26.248 | 30.00th=[26084], 40.00th=[31065], 50.00th=[32900], 60.00th=[36439], 00:16:26.248 | 70.00th=[42730], 80.00th=[68682], 90.00th=[77071], 95.00th=[78119], 00:16:26.248 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:16:26.248 | 99.99th=[81265] 00:16:26.248 bw ( KiB/s): min= 7080, max= 8456, per=12.89%, avg=7768.00, stdev=972.98, samples=2 00:16:26.248 iops : min= 1770, max= 2114, avg=1942.00, stdev=243.24, samples=2 00:16:26.248 lat (msec) : 2=0.03%, 10=0.33%, 20=7.54%, 50=76.51%, 100=15.59% 00:16:26.248 cpu : usr=1.99%, sys=5.68%, ctx=568, majf=0, minf=19 00:16:26.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.249 issued rwts: total=1558,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.249 00:16:26.249 Run status group 0 (all jobs): 00:16:26.249 READ: bw=52.3MiB/s (54.9MB/s), 6201KiB/s-24.5MiB/s (6350kB/s-25.7MB/s), io=52.7MiB (55.3MB), run=1001-1008msec 00:16:26.249 WRITE: bw=58.8MiB/s (61.7MB/s), 8127KiB/s-26.0MiB/s (8322kB/s-27.2MB/s), io=59.3MiB (62.2MB), run=1001-1008msec 00:16:26.249 00:16:26.249 Disk stats (read/write): 00:16:26.249 nvme0n1: ios=1586/1820, merge=0/0, ticks=10375/15014, in_queue=25389, util=88.18% 00:16:26.249 nvme0n2: ios=3217/3584, merge=0/0, ticks=20970/28872, in_queue=49842, util=89.28% 00:16:26.249 nvme0n3: ios=5530/5632, merge=0/0, ticks=12708/11581, in_queue=24289, util=89.29% 00:16:26.249 nvme0n4: ios=1553/1801, merge=0/0, ticks=9654/15374, in_queue=25028, util=89.53% 00:16:26.249 04:12:27 -- target/fio.sh@55 -- # sync 00:16:26.249 04:12:27 -- target/fio.sh@59 -- # fio_pid=87537 00:16:26.249 04:12:27 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:26.249 04:12:27 -- target/fio.sh@61 -- # sleep 3 00:16:26.249 [global] 00:16:26.249 thread=1 00:16:26.249 
invalidate=1 00:16:26.249 rw=read 00:16:26.249 time_based=1 00:16:26.249 runtime=10 00:16:26.249 ioengine=libaio 00:16:26.249 direct=1 00:16:26.249 bs=4096 00:16:26.249 iodepth=1 00:16:26.249 norandommap=1 00:16:26.249 numjobs=1 00:16:26.249 00:16:26.249 [job0] 00:16:26.249 filename=/dev/nvme0n1 00:16:26.249 [job1] 00:16:26.249 filename=/dev/nvme0n2 00:16:26.249 [job2] 00:16:26.249 filename=/dev/nvme0n3 00:16:26.249 [job3] 00:16:26.249 filename=/dev/nvme0n4 00:16:26.249 Could not set queue depth (nvme0n1) 00:16:26.249 Could not set queue depth (nvme0n2) 00:16:26.249 Could not set queue depth (nvme0n3) 00:16:26.249 Could not set queue depth (nvme0n4) 00:16:26.249 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.249 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.249 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.249 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.249 fio-3.35 00:16:26.249 Starting 4 threads 00:16:29.558 04:12:30 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:29.558 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=53047296, buflen=4096 00:16:29.558 fio: pid=87580, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:29.558 04:12:30 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:29.558 fio: pid=87579, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:29.558 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=56524800, buflen=4096 00:16:29.558 04:12:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:29.558 04:12:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:29.817 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=60588032, buflen=4096 00:16:29.818 fio: pid=87577, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:29.818 04:12:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:29.818 04:12:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:30.077 fio: pid=87578, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:30.077 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=66867200, buflen=4096 00:16:30.077 00:16:30.077 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87577: Tue Nov 26 04:12:31 2024 00:16:30.077 read: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(57.8MiB/3430msec) 00:16:30.077 slat (usec): min=8, max=11817, avg=16.15, stdev=155.21 00:16:30.077 clat (usec): min=133, max=3626, avg=214.49, stdev=58.68 00:16:30.077 lat (usec): min=145, max=12116, avg=230.64, stdev=166.58 00:16:30.077 clat percentiles (usec): 00:16:30.077 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 190], 00:16:30.077 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:16:30.077 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 277], 00:16:30.077 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 545], 99.95th=[ 840], 00:16:30.077 | 99.99th=[ 3589] 
00:16:30.077 bw ( KiB/s): min=17445, max=18088, per=28.13%, avg=17735.50, stdev=245.88, samples=6 00:16:30.077 iops : min= 4361, max= 4522, avg=4433.83, stdev=61.53, samples=6 00:16:30.077 lat (usec) : 250=91.10%, 500=8.79%, 750=0.04%, 1000=0.02% 00:16:30.077 lat (msec) : 2=0.02%, 4=0.02% 00:16:30.077 cpu : usr=1.31%, sys=4.64%, ctx=14798, majf=0, minf=1 00:16:30.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 issued rwts: total=14793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.077 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87578: Tue Nov 26 04:12:31 2024 00:16:30.077 read: IOPS=4447, BW=17.4MiB/s (18.2MB/s)(63.8MiB/3671msec) 00:16:30.077 slat (usec): min=10, max=12914, avg=17.00, stdev=201.16 00:16:30.077 clat (usec): min=73, max=3395, avg=206.55, stdev=52.53 00:16:30.077 lat (usec): min=139, max=13069, avg=223.55, stdev=208.61 00:16:30.077 clat percentiles (usec): 00:16:30.077 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 186], 00:16:30.077 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:16:30.077 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 245], 00:16:30.077 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 586], 99.95th=[ 1029], 00:16:30.077 | 99.99th=[ 3163] 00:16:30.077 bw ( KiB/s): min=17208, max=18056, per=28.14%, avg=17745.00, stdev=326.60, samples=7 00:16:30.077 iops : min= 4302, max= 4514, avg=4436.14, stdev=81.59, samples=7 00:16:30.077 lat (usec) : 100=0.01%, 250=96.44%, 500=3.43%, 750=0.05%, 1000=0.02% 00:16:30.077 lat (msec) : 2=0.04%, 4=0.02% 00:16:30.077 cpu : usr=0.90%, sys=4.90%, ctx=16337, majf=0, minf=2 00:16:30.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 issued rwts: total=16326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.077 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87579: Tue Nov 26 04:12:31 2024 00:16:30.077 read: IOPS=4291, BW=16.8MiB/s (17.6MB/s)(53.9MiB/3216msec) 00:16:30.077 slat (usec): min=8, max=11357, avg=16.14, stdev=123.35 00:16:30.077 clat (usec): min=130, max=3645, avg=215.77, stdev=51.82 00:16:30.077 lat (usec): min=145, max=11692, avg=231.92, stdev=136.48 00:16:30.077 clat percentiles (usec): 00:16:30.077 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:16:30.077 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:16:30.077 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 251], 95.00th=[ 289], 00:16:30.077 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 461], 99.95th=[ 660], 00:16:30.077 | 99.99th=[ 2089] 00:16:30.077 bw ( KiB/s): min=17181, max=17968, per=27.86%, avg=17568.83, stdev=315.31, samples=6 00:16:30.077 iops : min= 4295, max= 4492, avg=4392.17, stdev=78.89, samples=6 00:16:30.077 lat (usec) : 250=90.01%, 500=9.90%, 750=0.06% 00:16:30.077 lat (msec) : 2=0.01%, 4=0.01% 00:16:30.077 cpu : usr=0.93%, sys=4.98%, ctx=13806, majf=0, minf=2 00:16:30.077 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 issued rwts: total=13801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.077 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87580: Tue Nov 26 04:12:31 2024 00:16:30.077 read: IOPS=4406, BW=17.2MiB/s (18.0MB/s)(50.6MiB/2939msec) 00:16:30.077 slat (nsec): min=10593, max=71677, avg=13698.65, stdev=3937.93 00:16:30.077 clat (usec): min=126, max=2700, avg=211.81, stdev=41.08 00:16:30.077 lat (usec): min=137, max=2715, avg=225.51, stdev=41.39 00:16:30.077 clat percentiles (usec): 00:16:30.077 | 1.00th=[ 153], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 190], 00:16:30.077 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:16:30.077 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 253], 00:16:30.077 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 486], 99.95th=[ 619], 00:16:30.077 | 99.99th=[ 1647] 00:16:30.077 bw ( KiB/s): min=17288, max=17928, per=28.01%, avg=17660.80, stdev=237.70, samples=5 00:16:30.077 iops : min= 4322, max= 4482, avg=4415.20, stdev=59.42, samples=5 00:16:30.077 lat (usec) : 250=93.87%, 500=6.05%, 750=0.05%, 1000=0.01% 00:16:30.077 lat (msec) : 2=0.02%, 4=0.01% 00:16:30.077 cpu : usr=1.19%, sys=4.87%, ctx=12953, majf=0, minf=2 00:16:30.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.077 issued rwts: total=12952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.077 00:16:30.077 Run status group 0 (all jobs): 00:16:30.077 READ: bw=61.6MiB/s (64.6MB/s), 16.8MiB/s-17.4MiB/s (17.6MB/s-18.2MB/s), io=226MiB (237MB), run=2939-3671msec 00:16:30.077 00:16:30.077 Disk stats (read/write): 00:16:30.077 nvme0n1: ios=14517/0, merge=0/0, ticks=3177/0, in_queue=3177, util=95.42% 00:16:30.077 nvme0n2: ios=16028/0, merge=0/0, ticks=3406/0, in_queue=3406, util=95.32% 00:16:30.077 nvme0n3: ios=13506/0, merge=0/0, ticks=2947/0, in_queue=2947, util=96.27% 00:16:30.077 nvme0n4: ios=12639/0, merge=0/0, ticks=2721/0, in_queue=2721, util=96.76% 00:16:30.077 04:12:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.077 04:12:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:30.336 04:12:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.337 04:12:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:30.596 04:12:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.596 04:12:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:30.855 04:12:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.855 04:12:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:31.114 04:12:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.114 04:12:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:31.374 04:12:32 -- target/fio.sh@69 -- # fio_status=0 00:16:31.374 04:12:32 -- target/fio.sh@70 -- # wait 87537 00:16:31.374 04:12:32 -- target/fio.sh@70 -- # fio_status=4 00:16:31.374 04:12:32 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.374 04:12:32 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.374 04:12:32 -- common/autotest_common.sh@1208 -- # local i=0 00:16:31.374 04:12:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:31.374 04:12:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.374 04:12:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.374 04:12:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:31.374 nvmf hotplug test: fio failed as expected 00:16:31.374 04:12:33 -- common/autotest_common.sh@1220 -- # return 0 00:16:31.374 04:12:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:31.374 04:12:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:31.374 04:12:33 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.634 04:12:33 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:31.634 04:12:33 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:31.634 04:12:33 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:31.634 04:12:33 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:31.634 04:12:33 -- target/fio.sh@91 -- # nvmftestfini 00:16:31.634 04:12:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:31.634 04:12:33 -- nvmf/common.sh@116 -- # sync 00:16:31.634 04:12:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:31.634 04:12:33 -- nvmf/common.sh@119 -- # set +e 00:16:31.634 04:12:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:31.634 04:12:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:31.634 rmmod nvme_tcp 00:16:31.634 rmmod nvme_fabrics 00:16:31.634 rmmod nvme_keyring 00:16:31.634 04:12:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:31.634 04:12:33 -- nvmf/common.sh@123 -- # set -e 00:16:31.634 04:12:33 -- nvmf/common.sh@124 -- # return 0 00:16:31.634 04:12:33 -- nvmf/common.sh@477 -- # '[' -n 87046 ']' 00:16:31.634 04:12:33 -- nvmf/common.sh@478 -- # killprocess 87046 00:16:31.634 04:12:33 -- common/autotest_common.sh@936 -- # '[' -z 87046 ']' 00:16:31.634 04:12:33 -- common/autotest_common.sh@940 -- # kill -0 87046 00:16:31.634 04:12:33 -- common/autotest_common.sh@941 -- # uname 00:16:31.634 04:12:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.634 04:12:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87046 00:16:31.634 killing process with pid 87046 00:16:31.634 04:12:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:31.634 04:12:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:31.634 04:12:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87046' 00:16:31.634 04:12:33 -- common/autotest_common.sh@955 -- # kill 87046 00:16:31.634 04:12:33 -- common/autotest_common.sh@960 -- # wait 87046 00:16:31.893 04:12:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:31.893 04:12:33 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:31.893 04:12:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:31.893 04:12:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.893 04:12:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:31.893 04:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.893 04:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.893 04:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.893 04:12:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:31.893 00:16:31.893 real 0m19.473s 00:16:31.893 user 1m13.541s 00:16:31.893 sys 0m8.932s 00:16:31.893 04:12:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:31.893 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:16:31.893 ************************************ 00:16:31.893 END TEST nvmf_fio_target 00:16:31.893 ************************************ 00:16:32.151 04:12:33 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:32.151 04:12:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:32.151 04:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.151 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:16:32.151 ************************************ 00:16:32.151 START TEST nvmf_bdevio 00:16:32.151 ************************************ 00:16:32.151 04:12:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:32.151 * Looking for test storage... 00:16:32.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:32.151 04:12:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:32.151 04:12:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:32.152 04:12:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:32.152 04:12:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:32.152 04:12:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:32.152 04:12:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:32.152 04:12:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:32.152 04:12:33 -- scripts/common.sh@335 -- # IFS=.-: 00:16:32.152 04:12:33 -- scripts/common.sh@335 -- # read -ra ver1 00:16:32.152 04:12:33 -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.152 04:12:33 -- scripts/common.sh@336 -- # read -ra ver2 00:16:32.152 04:12:33 -- scripts/common.sh@337 -- # local 'op=<' 00:16:32.152 04:12:33 -- scripts/common.sh@339 -- # ver1_l=2 00:16:32.152 04:12:33 -- scripts/common.sh@340 -- # ver2_l=1 00:16:32.152 04:12:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:32.152 04:12:33 -- scripts/common.sh@343 -- # case "$op" in 00:16:32.152 04:12:33 -- scripts/common.sh@344 -- # : 1 00:16:32.152 04:12:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:32.152 04:12:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.152 04:12:33 -- scripts/common.sh@364 -- # decimal 1 00:16:32.152 04:12:33 -- scripts/common.sh@352 -- # local d=1 00:16:32.152 04:12:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.152 04:12:33 -- scripts/common.sh@354 -- # echo 1 00:16:32.152 04:12:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:32.152 04:12:33 -- scripts/common.sh@365 -- # decimal 2 00:16:32.152 04:12:33 -- scripts/common.sh@352 -- # local d=2 00:16:32.152 04:12:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.152 04:12:33 -- scripts/common.sh@354 -- # echo 2 00:16:32.152 04:12:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:32.152 04:12:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:32.152 04:12:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:32.152 04:12:33 -- scripts/common.sh@367 -- # return 0 00:16:32.152 04:12:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.152 04:12:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:32.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.152 --rc genhtml_branch_coverage=1 00:16:32.152 --rc genhtml_function_coverage=1 00:16:32.152 --rc genhtml_legend=1 00:16:32.152 --rc geninfo_all_blocks=1 00:16:32.152 --rc geninfo_unexecuted_blocks=1 00:16:32.152 00:16:32.152 ' 00:16:32.152 04:12:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:32.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.152 --rc genhtml_branch_coverage=1 00:16:32.152 --rc genhtml_function_coverage=1 00:16:32.152 --rc genhtml_legend=1 00:16:32.152 --rc geninfo_all_blocks=1 00:16:32.152 --rc geninfo_unexecuted_blocks=1 00:16:32.152 00:16:32.152 ' 00:16:32.152 04:12:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:32.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.152 --rc genhtml_branch_coverage=1 00:16:32.152 --rc genhtml_function_coverage=1 00:16:32.152 --rc genhtml_legend=1 00:16:32.152 --rc geninfo_all_blocks=1 00:16:32.152 --rc geninfo_unexecuted_blocks=1 00:16:32.152 00:16:32.152 ' 00:16:32.152 04:12:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:32.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.152 --rc genhtml_branch_coverage=1 00:16:32.152 --rc genhtml_function_coverage=1 00:16:32.152 --rc genhtml_legend=1 00:16:32.152 --rc geninfo_all_blocks=1 00:16:32.152 --rc geninfo_unexecuted_blocks=1 00:16:32.152 00:16:32.152 ' 00:16:32.152 04:12:33 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:32.152 04:12:33 -- nvmf/common.sh@7 -- # uname -s 00:16:32.152 04:12:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.152 04:12:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.152 04:12:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.152 04:12:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.152 04:12:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.152 04:12:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.152 04:12:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.152 04:12:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.152 04:12:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.152 04:12:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.152 04:12:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:32.152 
04:12:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:32.152 04:12:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.152 04:12:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.152 04:12:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:32.152 04:12:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:32.411 04:12:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.411 04:12:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.411 04:12:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.411 04:12:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.411 04:12:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.411 04:12:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.411 04:12:33 -- paths/export.sh@5 -- # export PATH 00:16:32.411 04:12:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.411 04:12:33 -- nvmf/common.sh@46 -- # : 0 00:16:32.411 04:12:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:32.411 04:12:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:32.411 04:12:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:32.411 04:12:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.411 04:12:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.412 04:12:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:32.412 04:12:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:32.412 04:12:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:32.412 04:12:33 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.412 04:12:33 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.412 04:12:33 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:32.412 04:12:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:32.412 04:12:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.412 04:12:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:32.412 04:12:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:32.412 04:12:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:32.412 04:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.412 04:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.412 04:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.412 04:12:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:32.412 04:12:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:32.412 04:12:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:32.412 04:12:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:32.412 04:12:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:32.412 04:12:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:32.412 04:12:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.412 04:12:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.412 04:12:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:32.412 04:12:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:32.412 04:12:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:32.412 04:12:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:32.412 04:12:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:32.412 04:12:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.412 04:12:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:32.412 04:12:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:32.412 04:12:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:32.412 04:12:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:32.412 04:12:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:32.412 04:12:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:32.412 Cannot find device "nvmf_tgt_br" 00:16:32.412 04:12:33 -- nvmf/common.sh@154 -- # true 00:16:32.412 04:12:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.412 Cannot find device "nvmf_tgt_br2" 00:16:32.412 04:12:33 -- nvmf/common.sh@155 -- # true 00:16:32.412 04:12:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:32.412 04:12:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:32.412 Cannot find device "nvmf_tgt_br" 00:16:32.412 04:12:33 -- nvmf/common.sh@157 -- # true 00:16:32.412 04:12:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:32.412 Cannot find device "nvmf_tgt_br2" 00:16:32.412 04:12:34 -- nvmf/common.sh@158 -- # true 00:16:32.412 04:12:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:32.412 04:12:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:32.412 04:12:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:32.412 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:32.412 04:12:34 -- nvmf/common.sh@161 -- # true 00:16:32.412 04:12:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:32.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:32.412 04:12:34 -- nvmf/common.sh@162 -- # true 00:16:32.412 04:12:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:32.412 04:12:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:32.412 04:12:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:32.412 04:12:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:32.412 04:12:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:32.412 04:12:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:32.412 04:12:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:32.412 04:12:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:32.412 04:12:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:32.412 04:12:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:32.412 04:12:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:32.412 04:12:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:32.412 04:12:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:32.412 04:12:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:32.671 04:12:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:32.671 04:12:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:32.671 04:12:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:32.671 04:12:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:32.671 04:12:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:32.671 04:12:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:32.671 04:12:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:32.671 04:12:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:32.671 04:12:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:32.671 04:12:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:32.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:32.671 00:16:32.671 --- 10.0.0.2 ping statistics --- 00:16:32.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.671 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:32.671 04:12:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:32.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:32.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:32.671 00:16:32.671 --- 10.0.0.3 ping statistics --- 00:16:32.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.671 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:32.671 04:12:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:32.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:32.671 00:16:32.671 --- 10.0.0.1 ping statistics --- 00:16:32.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.671 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:32.671 04:12:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.671 04:12:34 -- nvmf/common.sh@421 -- # return 0 00:16:32.671 04:12:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:32.671 04:12:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.671 04:12:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:32.671 04:12:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:32.671 04:12:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.671 04:12:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:32.671 04:12:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:32.671 04:12:34 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:32.671 04:12:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:32.671 04:12:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:32.671 04:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:32.671 04:12:34 -- nvmf/common.sh@469 -- # nvmfpid=87917 00:16:32.671 04:12:34 -- nvmf/common.sh@470 -- # waitforlisten 87917 00:16:32.671 04:12:34 -- common/autotest_common.sh@829 -- # '[' -z 87917 ']' 00:16:32.671 04:12:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:32.671 04:12:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.671 04:12:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.671 04:12:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.671 04:12:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.671 04:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:32.671 [2024-11-26 04:12:34.357582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:32.671 [2024-11-26 04:12:34.357668] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.931 [2024-11-26 04:12:34.496921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.931 [2024-11-26 04:12:34.574879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:32.931 [2024-11-26 04:12:34.575016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.931 [2024-11-26 04:12:34.575028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.931 [2024-11-26 04:12:34.575036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:32.931 [2024-11-26 04:12:34.575183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:32.931 [2024-11-26 04:12:34.575606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:32.931 [2024-11-26 04:12:34.576452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:32.931 [2024-11-26 04:12:34.576456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.867 04:12:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.867 04:12:35 -- common/autotest_common.sh@862 -- # return 0 00:16:33.867 04:12:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:33.867 04:12:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.867 04:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 04:12:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.867 04:12:35 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.867 04:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 04:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 [2024-11-26 04:12:35.438689] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.867 04:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.867 04:12:35 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:33.867 04:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 04:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 Malloc0 00:16:33.867 04:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.867 04:12:35 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:33.867 04:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 04:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 04:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.867 04:12:35 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:33.867 04:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 04:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 04:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.867 04:12:35 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.868 04:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.868 04:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:33.868 [2024-11-26 04:12:35.516217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.868 04:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.868 04:12:35 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:33.868 04:12:35 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:33.868 04:12:35 -- nvmf/common.sh@520 -- # config=() 00:16:33.868 04:12:35 -- nvmf/common.sh@520 -- # local subsystem config 00:16:33.868 04:12:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:33.868 04:12:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:33.868 { 00:16:33.868 "params": { 00:16:33.868 "name": "Nvme$subsystem", 00:16:33.868 "trtype": "$TEST_TRANSPORT", 00:16:33.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:33.868 "adrfam": "ipv4", 00:16:33.868 "trsvcid": "$NVMF_PORT", 00:16:33.868 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:33.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:33.868 "hdgst": ${hdgst:-false}, 00:16:33.868 "ddgst": ${ddgst:-false} 00:16:33.868 }, 00:16:33.868 "method": "bdev_nvme_attach_controller" 00:16:33.868 } 00:16:33.868 EOF 00:16:33.868 )") 00:16:33.868 04:12:35 -- nvmf/common.sh@542 -- # cat 00:16:33.868 04:12:35 -- nvmf/common.sh@544 -- # jq . 00:16:33.868 04:12:35 -- nvmf/common.sh@545 -- # IFS=, 00:16:33.868 04:12:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:33.868 "params": { 00:16:33.868 "name": "Nvme1", 00:16:33.868 "trtype": "tcp", 00:16:33.868 "traddr": "10.0.0.2", 00:16:33.868 "adrfam": "ipv4", 00:16:33.868 "trsvcid": "4420", 00:16:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.868 "hdgst": false, 00:16:33.868 "ddgst": false 00:16:33.868 }, 00:16:33.868 "method": "bdev_nvme_attach_controller" 00:16:33.868 }' 00:16:33.868 [2024-11-26 04:12:35.576576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:33.868 [2024-11-26 04:12:35.576668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87977 ] 00:16:34.127 [2024-11-26 04:12:35.719943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.127 [2024-11-26 04:12:35.800494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.127 [2024-11-26 04:12:35.800631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.127 [2024-11-26 04:12:35.800983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.386 [2024-11-26 04:12:36.004006] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:34.386 [2024-11-26 04:12:36.004353] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:34.386 I/O targets: 00:16:34.386 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:34.386 00:16:34.386 00:16:34.386 CUnit - A unit testing framework for C - Version 2.1-3 00:16:34.386 http://cunit.sourceforge.net/ 00:16:34.386 00:16:34.386 00:16:34.386 Suite: bdevio tests on: Nvme1n1 00:16:34.386 Test: blockdev write read block ...passed 00:16:34.386 Test: blockdev write zeroes read block ...passed 00:16:34.386 Test: blockdev write zeroes read no split ...passed 00:16:34.386 Test: blockdev write zeroes read split ...passed 00:16:34.386 Test: blockdev write zeroes read split partial ...passed 00:16:34.386 Test: blockdev reset ...[2024-11-26 04:12:36.119191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:34.386 [2024-11-26 04:12:36.119397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1939ed0 (9): Bad file descriptor 00:16:34.386 [2024-11-26 04:12:36.132615] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:34.386 passed 00:16:34.386 Test: blockdev write read 8 blocks ...passed 00:16:34.386 Test: blockdev write read size > 128k ...passed 00:16:34.386 Test: blockdev write read invalid size ...passed 00:16:34.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:34.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:34.646 Test: blockdev write read max offset ...passed 00:16:34.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:34.646 Test: blockdev writev readv 8 blocks ...passed 00:16:34.646 Test: blockdev writev readv 30 x 1block ...passed 00:16:34.646 Test: blockdev writev readv block ...passed 00:16:34.646 Test: blockdev writev readv size > 128k ...passed 00:16:34.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:34.646 Test: blockdev comparev and writev ...[2024-11-26 04:12:36.304351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.304387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.304416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.304425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.304813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.304835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.304851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.304861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.305285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.305306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.305322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.305332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.305803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.305843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.305859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:34.646 [2024-11-26 04:12:36.305869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:34.646 passed 00:16:34.646 Test: blockdev nvme passthru rw ...passed 00:16:34.646 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:12:36.389057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:34.646 [2024-11-26 04:12:36.389093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:34.646 passed 00:16:34.646 Test: blockdev nvme admin passthru ...[2024-11-26 04:12:36.389212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:34.646 [2024-11-26 04:12:36.389233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.389359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:34.646 [2024-11-26 04:12:36.389374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:34.646 [2024-11-26 04:12:36.389503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:34.646 [2024-11-26 04:12:36.389518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:34.646 passed 00:16:34.905 Test: blockdev copy ...passed 00:16:34.905 00:16:34.905 Run Summary: Type Total Ran Passed Failed Inactive 00:16:34.905 suites 1 1 n/a 0 0 00:16:34.905 tests 23 23 23 0 0 00:16:34.905 asserts 152 152 152 0 n/a 00:16:34.905 00:16:34.905 Elapsed time = 0.872 seconds 00:16:34.905 04:12:36 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.905 04:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.905 04:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:35.165 04:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.165 04:12:36 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:35.165 04:12:36 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:35.165 04:12:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:35.165 04:12:36 -- nvmf/common.sh@116 -- # sync 00:16:35.165 04:12:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:35.165 04:12:36 -- nvmf/common.sh@119 -- # set +e 00:16:35.165 04:12:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:35.165 04:12:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:35.165 rmmod nvme_tcp 00:16:35.165 rmmod nvme_fabrics 00:16:35.165 rmmod nvme_keyring 00:16:35.165 04:12:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:35.165 04:12:36 -- nvmf/common.sh@123 -- # set -e 00:16:35.165 04:12:36 -- nvmf/common.sh@124 -- # return 0 00:16:35.165 04:12:36 -- nvmf/common.sh@477 -- # '[' -n 87917 ']' 00:16:35.165 04:12:36 -- nvmf/common.sh@478 -- # killprocess 87917 00:16:35.165 04:12:36 -- common/autotest_common.sh@936 -- # '[' -z 87917 ']' 00:16:35.165 04:12:36 -- common/autotest_common.sh@940 -- # kill -0 87917 00:16:35.165 04:12:36 -- common/autotest_common.sh@941 -- # uname 00:16:35.165 04:12:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.165 04:12:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87917 00:16:35.165 04:12:36 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:16:35.165 04:12:36 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:35.165 killing process with pid 87917 00:16:35.165 04:12:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87917' 00:16:35.165 04:12:36 -- common/autotest_common.sh@955 -- # kill 87917 00:16:35.165 04:12:36 -- common/autotest_common.sh@960 -- # wait 87917 00:16:35.424 04:12:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:35.424 04:12:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:35.424 04:12:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:35.424 04:12:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.424 04:12:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:35.424 04:12:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.424 04:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.424 04:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.424 04:12:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:35.424 00:16:35.424 real 0m3.442s 00:16:35.424 user 0m12.299s 00:16:35.424 sys 0m0.958s 00:16:35.424 04:12:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:35.424 ************************************ 00:16:35.424 END TEST nvmf_bdevio 00:16:35.424 ************************************ 00:16:35.424 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:35.684 04:12:37 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:35.684 04:12:37 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:35.684 04:12:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:35.684 04:12:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.684 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:35.684 ************************************ 00:16:35.684 START TEST nvmf_bdevio_no_huge 00:16:35.684 ************************************ 00:16:35.684 04:12:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:35.684 * Looking for test storage... 
00:16:35.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:35.684 04:12:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:35.684 04:12:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:35.684 04:12:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:35.684 04:12:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:35.684 04:12:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:35.684 04:12:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:35.684 04:12:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:35.684 04:12:37 -- scripts/common.sh@335 -- # IFS=.-: 00:16:35.684 04:12:37 -- scripts/common.sh@335 -- # read -ra ver1 00:16:35.684 04:12:37 -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.684 04:12:37 -- scripts/common.sh@336 -- # read -ra ver2 00:16:35.684 04:12:37 -- scripts/common.sh@337 -- # local 'op=<' 00:16:35.684 04:12:37 -- scripts/common.sh@339 -- # ver1_l=2 00:16:35.684 04:12:37 -- scripts/common.sh@340 -- # ver2_l=1 00:16:35.684 04:12:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:35.684 04:12:37 -- scripts/common.sh@343 -- # case "$op" in 00:16:35.684 04:12:37 -- scripts/common.sh@344 -- # : 1 00:16:35.684 04:12:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:35.684 04:12:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.684 04:12:37 -- scripts/common.sh@364 -- # decimal 1 00:16:35.684 04:12:37 -- scripts/common.sh@352 -- # local d=1 00:16:35.684 04:12:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.684 04:12:37 -- scripts/common.sh@354 -- # echo 1 00:16:35.684 04:12:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:35.684 04:12:37 -- scripts/common.sh@365 -- # decimal 2 00:16:35.684 04:12:37 -- scripts/common.sh@352 -- # local d=2 00:16:35.684 04:12:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.684 04:12:37 -- scripts/common.sh@354 -- # echo 2 00:16:35.684 04:12:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:35.684 04:12:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:35.684 04:12:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:35.684 04:12:37 -- scripts/common.sh@367 -- # return 0 00:16:35.684 04:12:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.684 04:12:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.684 --rc genhtml_branch_coverage=1 00:16:35.684 --rc genhtml_function_coverage=1 00:16:35.684 --rc genhtml_legend=1 00:16:35.684 --rc geninfo_all_blocks=1 00:16:35.684 --rc geninfo_unexecuted_blocks=1 00:16:35.684 00:16:35.684 ' 00:16:35.684 04:12:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.684 --rc genhtml_branch_coverage=1 00:16:35.684 --rc genhtml_function_coverage=1 00:16:35.684 --rc genhtml_legend=1 00:16:35.684 --rc geninfo_all_blocks=1 00:16:35.684 --rc geninfo_unexecuted_blocks=1 00:16:35.684 00:16:35.684 ' 00:16:35.684 04:12:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.685 --rc genhtml_branch_coverage=1 00:16:35.685 --rc genhtml_function_coverage=1 00:16:35.685 --rc genhtml_legend=1 00:16:35.685 --rc geninfo_all_blocks=1 00:16:35.685 --rc geninfo_unexecuted_blocks=1 00:16:35.685 00:16:35.685 ' 00:16:35.685 
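A note on the version probe traced above: the harness splits the two version strings on dots and compares them field by field, so "lt 1.15 2" is true and the 1.x spelling of the coverage options (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is selected for this lcov. A minimal sketch of that element-wise comparison, under the assumption that a plain dot-separated compare is all that matters here (the real helper, cmp_versions in scripts/common.sh, also splits on '-' and ':'):

  # Sketch: succeed (return 0) when version $1 sorts before version $2.
  version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x: use the lcov_* coverage option names"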
04:12:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:35.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.685 --rc genhtml_branch_coverage=1 00:16:35.685 --rc genhtml_function_coverage=1 00:16:35.685 --rc genhtml_legend=1 00:16:35.685 --rc geninfo_all_blocks=1 00:16:35.685 --rc geninfo_unexecuted_blocks=1 00:16:35.685 00:16:35.685 ' 00:16:35.685 04:12:37 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.685 04:12:37 -- nvmf/common.sh@7 -- # uname -s 00:16:35.685 04:12:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.685 04:12:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.685 04:12:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.685 04:12:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.685 04:12:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.685 04:12:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.685 04:12:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.685 04:12:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.685 04:12:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.685 04:12:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.685 04:12:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:35.685 04:12:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:35.685 04:12:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.685 04:12:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.685 04:12:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.685 04:12:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.685 04:12:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.685 04:12:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.685 04:12:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.685 04:12:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.685 04:12:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.685 04:12:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.685 04:12:37 -- paths/export.sh@5 -- # export PATH 00:16:35.685 04:12:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.685 04:12:37 -- nvmf/common.sh@46 -- # : 0 00:16:35.685 04:12:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:35.685 04:12:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:35.685 04:12:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:35.685 04:12:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.685 04:12:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.685 04:12:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:35.685 04:12:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:35.685 04:12:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:35.685 04:12:37 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.685 04:12:37 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.685 04:12:37 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:35.685 04:12:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:35.685 04:12:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.685 04:12:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:35.685 04:12:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:35.685 04:12:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:35.685 04:12:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.685 04:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.685 04:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.685 04:12:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:35.685 04:12:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:35.685 04:12:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:35.685 04:12:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:35.685 04:12:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:35.685 04:12:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:35.685 04:12:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.685 04:12:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.685 04:12:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.685 04:12:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:35.685 04:12:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.685 04:12:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.685 04:12:37 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.685 04:12:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.685 04:12:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.685 04:12:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.685 04:12:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.685 04:12:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.685 04:12:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:35.944 04:12:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:35.944 Cannot find device "nvmf_tgt_br" 00:16:35.944 04:12:37 -- nvmf/common.sh@154 -- # true 00:16:35.944 04:12:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.944 Cannot find device "nvmf_tgt_br2" 00:16:35.944 04:12:37 -- nvmf/common.sh@155 -- # true 00:16:35.944 04:12:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:35.944 04:12:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:35.944 Cannot find device "nvmf_tgt_br" 00:16:35.944 04:12:37 -- nvmf/common.sh@157 -- # true 00:16:35.944 04:12:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:35.944 Cannot find device "nvmf_tgt_br2" 00:16:35.944 04:12:37 -- nvmf/common.sh@158 -- # true 00:16:35.944 04:12:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:35.944 04:12:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:35.944 04:12:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.944 04:12:37 -- nvmf/common.sh@161 -- # true 00:16:35.944 04:12:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.944 04:12:37 -- nvmf/common.sh@162 -- # true 00:16:35.944 04:12:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.944 04:12:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.944 04:12:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.944 04:12:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.944 04:12:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.944 04:12:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.944 04:12:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.944 04:12:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.944 04:12:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.944 04:12:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:35.944 04:12:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:35.944 04:12:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:35.944 04:12:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:35.944 04:12:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.944 04:12:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.944 04:12:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:35.944 04:12:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:35.944 04:12:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:35.944 04:12:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.944 04:12:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.203 04:12:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.203 04:12:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.203 04:12:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.203 04:12:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:36.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:16:36.203 00:16:36.203 --- 10.0.0.2 ping statistics --- 00:16:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.203 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:36.203 04:12:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:36.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:36.203 00:16:36.203 --- 10.0.0.3 ping statistics --- 00:16:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.203 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:36.203 04:12:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:36.203 00:16:36.203 --- 10.0.0.1 ping statistics --- 00:16:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.203 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:36.203 04:12:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.203 04:12:37 -- nvmf/common.sh@421 -- # return 0 00:16:36.203 04:12:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:36.203 04:12:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.203 04:12:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:36.203 04:12:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:36.203 04:12:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.203 04:12:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:36.203 04:12:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:36.203 04:12:37 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:36.203 04:12:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:36.203 04:12:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.203 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.203 04:12:37 -- nvmf/common.sh@469 -- # nvmfpid=88168 00:16:36.203 04:12:37 -- nvmf/common.sh@470 -- # waitforlisten 88168 00:16:36.203 04:12:37 -- common/autotest_common.sh@829 -- # '[' -z 88168 ']' 00:16:36.203 04:12:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.203 04:12:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.203 04:12:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
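For orientation, the nvmf_veth_init sequence traced above boils down to: one network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, an address plan of 10.0.0.1 (initiator) and 10.0.0.2 (target), and iptables rules admitting the NVMe/TCP port and bridge-local forwarding. A condensed sketch using the names from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # host reaches the target namespace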
00:16:36.203 04:12:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.203 04:12:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:36.203 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:16:36.203 [2024-11-26 04:12:37.861881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:36.203 [2024-11-26 04:12:37.862043] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:36.462 [2024-11-26 04:12:38.011618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.462 [2024-11-26 04:12:38.137879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:36.462 [2024-11-26 04:12:38.138103] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.462 [2024-11-26 04:12:38.138123] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.462 [2024-11-26 04:12:38.138134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.462 [2024-11-26 04:12:38.138281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:36.462 [2024-11-26 04:12:38.138863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:36.462 [2024-11-26 04:12:38.138953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:36.462 [2024-11-26 04:12:38.138956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.399 04:12:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.399 04:12:38 -- common/autotest_common.sh@862 -- # return 0 00:16:37.399 04:12:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:37.399 04:12:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.399 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 04:12:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.399 04:12:38 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.399 04:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.399 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 [2024-11-26 04:12:38.936205] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.399 04:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.399 04:12:38 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:37.399 04:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.399 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 Malloc0 00:16:37.399 04:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.399 04:12:38 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:37.399 04:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.399 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 04:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.399 04:12:38 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.399 04:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 
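The bdevio target is then provisioned over the RPC socket; the rpc_cmd calls traced above correspond to the following rpc.py invocations, shown here as a sketch of the sequence rather than the harness itself (flags copied verbatim from the trace; the TCP listener on 10.0.0.2:4420 is added immediately below):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # bring up the TCP transport
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace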
00:16:37.399 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 04:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.399 04:12:38 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.399 04:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.399 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 [2024-11-26 04:12:38.975148] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.399 04:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.399 04:12:38 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:37.399 04:12:38 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:37.399 04:12:38 -- nvmf/common.sh@520 -- # config=() 00:16:37.399 04:12:38 -- nvmf/common.sh@520 -- # local subsystem config 00:16:37.399 04:12:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:37.399 04:12:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:37.399 { 00:16:37.399 "params": { 00:16:37.399 "name": "Nvme$subsystem", 00:16:37.399 "trtype": "$TEST_TRANSPORT", 00:16:37.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.399 "adrfam": "ipv4", 00:16:37.399 "trsvcid": "$NVMF_PORT", 00:16:37.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.399 "hdgst": ${hdgst:-false}, 00:16:37.399 "ddgst": ${ddgst:-false} 00:16:37.399 }, 00:16:37.399 "method": "bdev_nvme_attach_controller" 00:16:37.399 } 00:16:37.399 EOF 00:16:37.399 )") 00:16:37.399 04:12:38 -- nvmf/common.sh@542 -- # cat 00:16:37.399 04:12:38 -- nvmf/common.sh@544 -- # jq . 00:16:37.399 04:12:38 -- nvmf/common.sh@545 -- # IFS=, 00:16:37.399 04:12:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:37.399 "params": { 00:16:37.399 "name": "Nvme1", 00:16:37.399 "trtype": "tcp", 00:16:37.399 "traddr": "10.0.0.2", 00:16:37.399 "adrfam": "ipv4", 00:16:37.399 "trsvcid": "4420", 00:16:37.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.399 "hdgst": false, 00:16:37.399 "ddgst": false 00:16:37.399 }, 00:16:37.399 "method": "bdev_nvme_attach_controller" 00:16:37.399 }' 00:16:37.399 [2024-11-26 04:12:39.032435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:37.399 [2024-11-26 04:12:39.032540] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88222 ] 00:16:37.657 [2024-11-26 04:12:39.174846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.657 [2024-11-26 04:12:39.312251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.657 [2024-11-26 04:12:39.312405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.657 [2024-11-26 04:12:39.312405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.915 [2024-11-26 04:12:39.518830] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
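bdevio itself is configured through the JSON passed to --json /dev/fd/62: gen_nvmf_target_json emits a bdev_nvme_attach_controller call pointing the initiator at the listener just created. The same object, reformatted for readability (field values exactly as printed above; the harness wraps this into its bdev subsystem config, which is not shown in the trace):

  {
    "method": "bdev_nvme_attach_controller",
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    }
  }

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error that follows appears to be the bdevio app failing to start its own RPC server on the default socket already held by the target; as the subsequent output shows, the tests run regardless.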
00:16:37.915 [2024-11-26 04:12:39.518864] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:37.915 I/O targets: 00:16:37.915 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:37.915 00:16:37.915 00:16:37.915 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.915 http://cunit.sourceforge.net/ 00:16:37.915 00:16:37.915 00:16:37.915 Suite: bdevio tests on: Nvme1n1 00:16:37.915 Test: blockdev write read block ...passed 00:16:37.915 Test: blockdev write zeroes read block ...passed 00:16:37.915 Test: blockdev write zeroes read no split ...passed 00:16:37.915 Test: blockdev write zeroes read split ...passed 00:16:37.915 Test: blockdev write zeroes read split partial ...passed 00:16:37.915 Test: blockdev reset ...[2024-11-26 04:12:39.647151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:37.915 [2024-11-26 04:12:39.647249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98820 (9): Bad file descriptor 00:16:37.916 [2024-11-26 04:12:39.667593] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:37.916 passed 00:16:37.916 Test: blockdev write read 8 blocks ...passed 00:16:37.916 Test: blockdev write read size > 128k ...passed 00:16:37.916 Test: blockdev write read invalid size ...passed 00:16:38.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:38.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:38.175 Test: blockdev write read max offset ...passed 00:16:38.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:38.175 Test: blockdev writev readv 8 blocks ...passed 00:16:38.175 Test: blockdev writev readv 30 x 1block ...passed 00:16:38.175 Test: blockdev writev readv block ...passed 00:16:38.175 Test: blockdev writev readv size > 128k ...passed 00:16:38.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:38.175 Test: blockdev comparev and writev ...[2024-11-26 04:12:39.845739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.846042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.846203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.846361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.846904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.847051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.847224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.847382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.847835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.847860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.847878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.847889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.848239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.848260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.848277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:38.175 [2024-11-26 04:12:39.848287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:38.175 passed 00:16:38.175 Test: blockdev nvme passthru rw ...passed 00:16:38.175 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:12:39.930952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.175 [2024-11-26 04:12:39.930976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.931118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.175 [2024-11-26 04:12:39.931139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.931257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.175 [2024-11-26 04:12:39.931287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:38.175 [2024-11-26 04:12:39.931408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:38.175 [2024-11-26 04:12:39.931428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:38.175 passed 00:16:38.434 Test: blockdev nvme admin passthru ...passed 00:16:38.434 Test: blockdev copy ...passed 00:16:38.434 00:16:38.434 Run Summary: Type Total Ran Passed Failed Inactive 00:16:38.434 suites 1 1 n/a 0 0 00:16:38.434 tests 23 23 23 0 0 00:16:38.434 asserts 152 152 152 0 n/a 00:16:38.434 00:16:38.434 Elapsed time = 0.941 seconds 00:16:38.692 04:12:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.692 04:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.692 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:38.692 04:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.692 04:12:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:38.692 04:12:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:38.692 04:12:40 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:38.692 04:12:40 -- nvmf/common.sh@116 -- # sync 00:16:38.692 04:12:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:38.692 04:12:40 -- nvmf/common.sh@119 -- # set +e 00:16:38.692 04:12:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:38.692 04:12:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:38.692 rmmod nvme_tcp 00:16:38.692 rmmod nvme_fabrics 00:16:38.692 rmmod nvme_keyring 00:16:38.951 04:12:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:38.951 04:12:40 -- nvmf/common.sh@123 -- # set -e 00:16:38.951 04:12:40 -- nvmf/common.sh@124 -- # return 0 00:16:38.951 04:12:40 -- nvmf/common.sh@477 -- # '[' -n 88168 ']' 00:16:38.951 04:12:40 -- nvmf/common.sh@478 -- # killprocess 88168 00:16:38.951 04:12:40 -- common/autotest_common.sh@936 -- # '[' -z 88168 ']' 00:16:38.951 04:12:40 -- common/autotest_common.sh@940 -- # kill -0 88168 00:16:38.951 04:12:40 -- common/autotest_common.sh@941 -- # uname 00:16:38.951 04:12:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.951 04:12:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88168 00:16:38.952 04:12:40 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:38.952 killing process with pid 88168 00:16:38.952 04:12:40 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:38.952 04:12:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88168' 00:16:38.952 04:12:40 -- common/autotest_common.sh@955 -- # kill 88168 00:16:38.952 04:12:40 -- common/autotest_common.sh@960 -- # wait 88168 00:16:39.211 04:12:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.211 04:12:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:39.211 04:12:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:39.211 04:12:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.211 04:12:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:39.211 04:12:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.211 04:12:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.211 04:12:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.211 04:12:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:39.211 00:16:39.211 real 0m3.714s 00:16:39.211 user 0m13.038s 00:16:39.211 sys 0m1.443s 00:16:39.211 04:12:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:39.211 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:39.211 ************************************ 00:16:39.211 END TEST nvmf_bdevio_no_huge 00:16:39.211 ************************************ 00:16:39.211 04:12:40 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:39.211 04:12:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:39.211 04:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.211 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:39.211 ************************************ 00:16:39.211 START TEST nvmf_tls 00:16:39.211 ************************************ 00:16:39.211 04:12:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:39.471 * Looking for test storage... 
00:16:39.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:39.471 04:12:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:39.471 04:12:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:39.471 04:12:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:39.471 04:12:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:39.471 04:12:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:39.471 04:12:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:39.471 04:12:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:39.471 04:12:41 -- scripts/common.sh@335 -- # IFS=.-: 00:16:39.471 04:12:41 -- scripts/common.sh@335 -- # read -ra ver1 00:16:39.471 04:12:41 -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.471 04:12:41 -- scripts/common.sh@336 -- # read -ra ver2 00:16:39.471 04:12:41 -- scripts/common.sh@337 -- # local 'op=<' 00:16:39.471 04:12:41 -- scripts/common.sh@339 -- # ver1_l=2 00:16:39.471 04:12:41 -- scripts/common.sh@340 -- # ver2_l=1 00:16:39.471 04:12:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:39.471 04:12:41 -- scripts/common.sh@343 -- # case "$op" in 00:16:39.471 04:12:41 -- scripts/common.sh@344 -- # : 1 00:16:39.471 04:12:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:39.471 04:12:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.471 04:12:41 -- scripts/common.sh@364 -- # decimal 1 00:16:39.471 04:12:41 -- scripts/common.sh@352 -- # local d=1 00:16:39.471 04:12:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.471 04:12:41 -- scripts/common.sh@354 -- # echo 1 00:16:39.471 04:12:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:39.471 04:12:41 -- scripts/common.sh@365 -- # decimal 2 00:16:39.471 04:12:41 -- scripts/common.sh@352 -- # local d=2 00:16:39.471 04:12:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.471 04:12:41 -- scripts/common.sh@354 -- # echo 2 00:16:39.471 04:12:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:39.471 04:12:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:39.471 04:12:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:39.471 04:12:41 -- scripts/common.sh@367 -- # return 0 00:16:39.471 04:12:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.471 04:12:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:39.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.471 --rc genhtml_branch_coverage=1 00:16:39.471 --rc genhtml_function_coverage=1 00:16:39.471 --rc genhtml_legend=1 00:16:39.471 --rc geninfo_all_blocks=1 00:16:39.471 --rc geninfo_unexecuted_blocks=1 00:16:39.471 00:16:39.471 ' 00:16:39.471 04:12:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:39.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.471 --rc genhtml_branch_coverage=1 00:16:39.471 --rc genhtml_function_coverage=1 00:16:39.471 --rc genhtml_legend=1 00:16:39.471 --rc geninfo_all_blocks=1 00:16:39.471 --rc geninfo_unexecuted_blocks=1 00:16:39.471 00:16:39.471 ' 00:16:39.471 04:12:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:39.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.471 --rc genhtml_branch_coverage=1 00:16:39.471 --rc genhtml_function_coverage=1 00:16:39.471 --rc genhtml_legend=1 00:16:39.471 --rc geninfo_all_blocks=1 00:16:39.471 --rc geninfo_unexecuted_blocks=1 00:16:39.471 00:16:39.471 ' 00:16:39.471 
04:12:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:39.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.471 --rc genhtml_branch_coverage=1 00:16:39.471 --rc genhtml_function_coverage=1 00:16:39.471 --rc genhtml_legend=1 00:16:39.471 --rc geninfo_all_blocks=1 00:16:39.471 --rc geninfo_unexecuted_blocks=1 00:16:39.471 00:16:39.471 ' 00:16:39.471 04:12:41 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.471 04:12:41 -- nvmf/common.sh@7 -- # uname -s 00:16:39.471 04:12:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.471 04:12:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.471 04:12:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.471 04:12:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.471 04:12:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.471 04:12:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.471 04:12:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.471 04:12:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.471 04:12:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.471 04:12:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.471 04:12:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:39.471 04:12:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:16:39.471 04:12:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.471 04:12:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.471 04:12:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.471 04:12:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.471 04:12:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.471 04:12:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.471 04:12:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.471 04:12:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.471 04:12:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.472 04:12:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.472 04:12:41 -- paths/export.sh@5 -- # export PATH 00:16:39.472 04:12:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.472 04:12:41 -- nvmf/common.sh@46 -- # : 0 00:16:39.472 04:12:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.472 04:12:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.472 04:12:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.472 04:12:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.472 04:12:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.472 04:12:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.472 04:12:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.472 04:12:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.472 04:12:41 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.472 04:12:41 -- target/tls.sh@71 -- # nvmftestinit 00:16:39.472 04:12:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.472 04:12:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.472 04:12:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.472 04:12:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.472 04:12:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.472 04:12:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.472 04:12:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.472 04:12:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.472 04:12:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:39.472 04:12:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:39.472 04:12:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:39.472 04:12:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:39.472 04:12:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:39.472 04:12:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:39.472 04:12:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.472 04:12:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.472 04:12:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:39.472 04:12:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:39.472 04:12:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.472 04:12:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.472 04:12:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.472 
04:12:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.472 04:12:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.472 04:12:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.472 04:12:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.472 04:12:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.472 04:12:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:39.472 04:12:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:39.472 Cannot find device "nvmf_tgt_br" 00:16:39.472 04:12:41 -- nvmf/common.sh@154 -- # true 00:16:39.472 04:12:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.472 Cannot find device "nvmf_tgt_br2" 00:16:39.472 04:12:41 -- nvmf/common.sh@155 -- # true 00:16:39.472 04:12:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:39.472 04:12:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:39.472 Cannot find device "nvmf_tgt_br" 00:16:39.472 04:12:41 -- nvmf/common.sh@157 -- # true 00:16:39.472 04:12:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:39.472 Cannot find device "nvmf_tgt_br2" 00:16:39.472 04:12:41 -- nvmf/common.sh@158 -- # true 00:16:39.472 04:12:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:39.472 04:12:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:39.731 04:12:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.731 04:12:41 -- nvmf/common.sh@161 -- # true 00:16:39.731 04:12:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.731 04:12:41 -- nvmf/common.sh@162 -- # true 00:16:39.731 04:12:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.731 04:12:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.731 04:12:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.731 04:12:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.731 04:12:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.731 04:12:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.731 04:12:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.731 04:12:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:39.731 04:12:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:39.731 04:12:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:39.731 04:12:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:39.731 04:12:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:39.731 04:12:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:39.731 04:12:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.731 04:12:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.731 04:12:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.731 04:12:41 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:39.731 04:12:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:39.731 04:12:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.731 04:12:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.731 04:12:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.731 04:12:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.731 04:12:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.731 04:12:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:39.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:39.731 00:16:39.731 --- 10.0.0.2 ping statistics --- 00:16:39.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.731 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:39.731 04:12:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:39.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:39.731 00:16:39.731 --- 10.0.0.3 ping statistics --- 00:16:39.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.731 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:39.731 04:12:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:39.731 00:16:39.731 --- 10.0.0.1 ping statistics --- 00:16:39.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.731 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:39.731 04:12:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.731 04:12:41 -- nvmf/common.sh@421 -- # return 0 00:16:39.731 04:12:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.731 04:12:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.731 04:12:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.731 04:12:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.731 04:12:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.731 04:12:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.731 04:12:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.731 04:12:41 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:39.731 04:12:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.731 04:12:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:39.731 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:16:39.991 04:12:41 -- nvmf/common.sh@469 -- # nvmfpid=88421 00:16:39.991 04:12:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:39.991 04:12:41 -- nvmf/common.sh@470 -- # waitforlisten 88421 00:16:39.991 04:12:41 -- common/autotest_common.sh@829 -- # '[' -z 88421 ']' 00:16:39.991 04:12:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.991 04:12:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
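Unlike the bdevio target earlier (-m 0x78 --no-huge), this target is started with -m 0x2 and --wait-for-rpc: the reactor comes up but subsystem initialization is deferred so the socket layer can be reconfigured over RPC before any listener exists. The pattern, roughly (binary and script paths abbreviated relative to the repo root):

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  scripts/rpc.py sock_set_default_impl -i ssl                  # select the ssl socket implementation
  scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  scripts/rpc.py framework_start_init                          # only now finish target initialization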
00:16:39.991 04:12:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.991 04:12:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.991 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:16:39.991 [2024-11-26 04:12:41.552937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:39.991 [2024-11-26 04:12:41.553025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.991 [2024-11-26 04:12:41.697601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.251 [2024-11-26 04:12:41.796162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.251 [2024-11-26 04:12:41.796340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.251 [2024-11-26 04:12:41.796357] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.251 [2024-11-26 04:12:41.796369] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.251 [2024-11-26 04:12:41.796401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.185 04:12:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.185 04:12:42 -- common/autotest_common.sh@862 -- # return 0 00:16:41.185 04:12:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.185 04:12:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.185 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:16:41.185 04:12:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.185 04:12:42 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:41.185 04:12:42 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:41.185 true 00:16:41.185 04:12:42 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:41.185 04:12:42 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:41.443 04:12:43 -- target/tls.sh@82 -- # version=0 00:16:41.443 04:12:43 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:41.443 04:12:43 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:41.701 04:12:43 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:41.701 04:12:43 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:41.959 04:12:43 -- target/tls.sh@90 -- # version=13 00:16:41.959 04:12:43 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:41.959 04:12:43 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:42.217 04:12:43 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.217 04:12:43 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:42.475 04:12:44 -- target/tls.sh@98 -- # version=7 00:16:42.475 04:12:44 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:42.475 04:12:44 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.475 04:12:44 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:42.475 04:12:44 -- 
target/tls.sh@105 -- # ktls=false 00:16:42.476 04:12:44 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:42.476 04:12:44 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:42.732 04:12:44 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.732 04:12:44 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:42.990 04:12:44 -- target/tls.sh@113 -- # ktls=true 00:16:42.990 04:12:44 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:42.990 04:12:44 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:43.249 04:12:44 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:43.249 04:12:44 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:43.508 04:12:45 -- target/tls.sh@121 -- # ktls=false 00:16:43.508 04:12:45 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:43.508 04:12:45 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:43.508 04:12:45 -- target/tls.sh@49 -- # local key hash crc 00:16:43.508 04:12:45 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:43.508 04:12:45 -- target/tls.sh@51 -- # hash=01 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # gzip -1 -c 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # head -c 4 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # tail -c8 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # crc='p$H�' 00:16:43.508 04:12:45 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:43.508 04:12:45 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:43.508 04:12:45 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:43.508 04:12:45 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:43.508 04:12:45 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:43.508 04:12:45 -- target/tls.sh@49 -- # local key hash crc 00:16:43.508 04:12:45 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:43.508 04:12:45 -- target/tls.sh@51 -- # hash=01 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # gzip -1 -c 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # tail -c8 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # head -c 4 00:16:43.508 04:12:45 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:43.508 04:12:45 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:43.508 04:12:45 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:43.508 04:12:45 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:43.508 04:12:45 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:43.508 04:12:45 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:43.508 04:12:45 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:43.508 04:12:45 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:43.508 04:12:45 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
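The two format_interchange_psk calls above are where the plain hex PSKs become NVMe TLS interchange keys of the form NVMeTLSkey-1:<hash>:<base64>: . The gzip pipeline is a portable CRC32: the last 8 bytes of a gzip stream are the CRC32 and the input length, so tail -c8 | head -c4 extracts the little-endian CRC of the configured key, which is then appended to the key and base64-encoded together with it. The hash field records the retained-hash algorithm the key targets (01 corresponds to SHA-256 here; 02, used later for the 48-character key, to SHA-384). A compact sketch of the same derivation, using only coreutils as the script does; like the original it would mishandle a CRC containing a NUL byte, which never happens for these fixed sample keys:

    format_interchange_psk() {
        local key=$1 hash=$2 crc
        # gzip trailer = CRC32 (little-endian) followed by input size, 4 bytes each
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
        printf 'NVMeTLSkey-1:%s:%s:' "$hash" "$(echo -n "$key$crc" | base64)"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 01 > key1.txt
    format_interchange_psk ffeeddccbbaa99887766554433221100 01 > key2.txt
    chmod 0600 key1.txt key2.txt   # the host-side attach later rejects keys with wider permissions

Running the first call reproduces the NVMeTLSkey-1:01:MDAxMTIy... value echoed into key1.txt above.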
00:16:43.508 04:12:45 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:43.508 04:12:45 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:43.508 04:12:45 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:43.766 04:12:45 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:44.333 04:12:45 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:44.333 04:12:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:44.333 04:12:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:44.333 [2024-11-26 04:12:46.007155] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.333 04:12:46 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:44.591 04:12:46 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:44.849 [2024-11-26 04:12:46.419228] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:44.849 [2024-11-26 04:12:46.419458] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.849 04:12:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:45.108 malloc0 00:16:45.108 04:12:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:45.366 04:12:47 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.624 04:12:47 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.830 Initializing NVMe Controllers 00:16:57.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:57.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:57.830 Initialization complete. Launching workers. 
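setup_nvmf_tgt, replayed in the commands above, is the whole target-side TLS bring-up: pin the ssl socket implementation to TLS 1.3, finish framework init, create the TCP transport with the options this suite uses (-t tcp -o), add a subsystem, attach a TLS-enabled listener (-k), back it with a malloc namespace, and bind host1 to the PSK file. Condensed into the equivalent rpc.py sequence (paths as in this run; a sketch of the calls visible in the log, not the script itself):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

With that in place, the spdk_nvme_perf run launched above (-S ssl --psk-path key1.txt against host1/cnode1) can complete its handshake, and its latency numbers just below serve as the baseline for the negative tests later in the log.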
00:16:57.830 ======================================================== 00:16:57.830 Latency(us) 00:16:57.830 Device Information : IOPS MiB/s Average min max 00:16:57.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9551.40 37.31 6702.30 1386.01 15633.58 00:16:57.830 ======================================================== 00:16:57.830 Total : 9551.40 37.31 6702.30 1386.01 15633.58 00:16:57.830 00:16:57.830 04:12:57 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.830 04:12:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:57.830 04:12:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:57.830 04:12:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:57.830 04:12:57 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:57.830 04:12:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.830 04:12:57 -- target/tls.sh@28 -- # bdevperf_pid=88786 00:16:57.830 04:12:57 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:57.830 04:12:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.830 04:12:57 -- target/tls.sh@31 -- # waitforlisten 88786 /var/tmp/bdevperf.sock 00:16:57.830 04:12:57 -- common/autotest_common.sh@829 -- # '[' -z 88786 ']' 00:16:57.830 04:12:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.830 04:12:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.830 04:12:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.830 04:12:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.830 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:16:57.830 [2024-11-26 04:12:57.478954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.830 [2024-11-26 04:12:57.479046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88786 ] 00:16:57.830 [2024-11-26 04:12:57.623609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.830 [2024-11-26 04:12:57.706722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.830 04:12:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.830 04:12:58 -- common/autotest_common.sh@862 -- # return 0 00:16:57.830 04:12:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.830 [2024-11-26 04:12:58.684929] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.830 TLSTESTn1 00:16:57.830 04:12:58 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:57.830 Running I/O for 10 seconds... 
00:17:07.871 00:17:07.871 Latency(us) 00:17:07.871 [2024-11-26T04:13:09.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.871 [2024-11-26T04:13:09.639Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:07.871 Verification LBA range: start 0x0 length 0x2000 00:17:07.871 TLSTESTn1 : 10.01 6528.52 25.50 0.00 0.00 19576.45 4527.94 21209.83 00:17:07.871 [2024-11-26T04:13:09.639Z] =================================================================================================================== 00:17:07.871 [2024-11-26T04:13:09.639Z] Total : 6528.52 25.50 0.00 0.00 19576.45 4527.94 21209.83 00:17:07.871 0 00:17:07.871 04:13:08 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:07.871 04:13:08 -- target/tls.sh@45 -- # killprocess 88786 00:17:07.871 04:13:08 -- common/autotest_common.sh@936 -- # '[' -z 88786 ']' 00:17:07.871 04:13:08 -- common/autotest_common.sh@940 -- # kill -0 88786 00:17:07.871 04:13:08 -- common/autotest_common.sh@941 -- # uname 00:17:07.871 04:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.871 04:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88786 00:17:07.871 04:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:07.871 04:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:07.871 04:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88786' 00:17:07.871 killing process with pid 88786 00:17:07.871 04:13:08 -- common/autotest_common.sh@955 -- # kill 88786 00:17:07.871 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.871 00:17:07.871 Latency(us) 00:17:07.871 [2024-11-26T04:13:09.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.871 [2024-11-26T04:13:09.639Z] =================================================================================================================== 00:17:07.871 [2024-11-26T04:13:09.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.871 04:13:08 -- common/autotest_common.sh@960 -- # wait 88786 00:17:07.871 04:13:09 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:07.871 04:13:09 -- common/autotest_common.sh@650 -- # local es=0 00:17:07.871 04:13:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:07.871 04:13:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:07.871 04:13:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.871 04:13:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:07.871 04:13:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.871 04:13:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:07.871 04:13:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:07.871 04:13:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:07.871 04:13:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:07.871 04:13:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:07.871 04:13:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.871 
04:13:09 -- target/tls.sh@28 -- # bdevperf_pid=88939 00:17:07.871 04:13:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:07.871 04:13:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:07.871 04:13:09 -- target/tls.sh@31 -- # waitforlisten 88939 /var/tmp/bdevperf.sock 00:17:07.871 04:13:09 -- common/autotest_common.sh@829 -- # '[' -z 88939 ']' 00:17:07.871 04:13:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.871 04:13:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.871 04:13:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.871 04:13:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.871 04:13:09 -- common/autotest_common.sh@10 -- # set +x 00:17:07.871 [2024-11-26 04:13:09.232862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.871 [2024-11-26 04:13:09.232978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88939 ] 00:17:07.871 [2024-11-26 04:13:09.363830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.871 [2024-11-26 04:13:09.444418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.808 04:13:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.808 04:13:10 -- common/autotest_common.sh@862 -- # return 0 00:17:08.808 04:13:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:08.808 [2024-11-26 04:13:10.413005] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:08.808 [2024-11-26 04:13:10.423713] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:08.808 [2024-11-26 04:13:10.424341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc09cc0 (107): Transport endpoint is not connected 00:17:08.808 [2024-11-26 04:13:10.425325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc09cc0 (9): Bad file descriptor 00:17:08.808 [2024-11-26 04:13:10.426321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:08.808 [2024-11-26 04:13:10.426342] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:08.808 [2024-11-26 04:13:10.426366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
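This attach is the first negative case: bdevperf presents key2.txt for host1, but the target only registered key1.txt for that host, so the TLS handshake is torn down and the controller never initializes (the errno 107 and bad-file-descriptor messages above, with the Code=-32602 JSON-RPC error just below). The happy-path call differs only in the key file; as a sketch against the same bdevperf RPC socket:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    $RPC bdev_nvme_get_controllers        # should list TLSTEST once an attach succeeds

The NOT wrapper from autotest_common.sh (the es=1 / (( !es == 0 )) lines that follow) inverts the exit code, so the test passes precisely because this attach fails.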
00:17:08.808 2024/11/26 04:13:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:08.808 request: 00:17:08.808 { 00:17:08.808 "method": "bdev_nvme_attach_controller", 00:17:08.808 "params": { 00:17:08.808 "name": "TLSTEST", 00:17:08.808 "trtype": "tcp", 00:17:08.808 "traddr": "10.0.0.2", 00:17:08.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.808 "adrfam": "ipv4", 00:17:08.808 "trsvcid": "4420", 00:17:08.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.808 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:08.808 } 00:17:08.808 } 00:17:08.808 Got JSON-RPC error response 00:17:08.808 GoRPCClient: error on JSON-RPC call 00:17:08.808 04:13:10 -- target/tls.sh@36 -- # killprocess 88939 00:17:08.808 04:13:10 -- common/autotest_common.sh@936 -- # '[' -z 88939 ']' 00:17:08.808 04:13:10 -- common/autotest_common.sh@940 -- # kill -0 88939 00:17:08.808 04:13:10 -- common/autotest_common.sh@941 -- # uname 00:17:08.808 04:13:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.808 04:13:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88939 00:17:08.808 killing process with pid 88939 00:17:08.808 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.808 00:17:08.808 Latency(us) 00:17:08.808 [2024-11-26T04:13:10.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.808 [2024-11-26T04:13:10.576Z] =================================================================================================================== 00:17:08.808 [2024-11-26T04:13:10.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.808 04:13:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:08.808 04:13:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:08.808 04:13:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88939' 00:17:08.808 04:13:10 -- common/autotest_common.sh@955 -- # kill 88939 00:17:08.808 04:13:10 -- common/autotest_common.sh@960 -- # wait 88939 00:17:09.068 04:13:10 -- target/tls.sh@37 -- # return 1 00:17:09.068 04:13:10 -- common/autotest_common.sh@653 -- # es=1 00:17:09.068 04:13:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.068 04:13:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.068 04:13:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.068 04:13:10 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.068 04:13:10 -- common/autotest_common.sh@650 -- # local es=0 00:17:09.068 04:13:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.068 04:13:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:09.068 04:13:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.068 04:13:10 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:09.068 04:13:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.068 04:13:10 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.068 04:13:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:09.068 04:13:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:09.068 04:13:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:09.068 04:13:10 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:09.068 04:13:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.068 04:13:10 -- target/tls.sh@28 -- # bdevperf_pid=88984 00:17:09.068 04:13:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.068 04:13:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.068 04:13:10 -- target/tls.sh@31 -- # waitforlisten 88984 /var/tmp/bdevperf.sock 00:17:09.068 04:13:10 -- common/autotest_common.sh@829 -- # '[' -z 88984 ']' 00:17:09.068 04:13:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.068 04:13:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.068 04:13:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.068 04:13:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.068 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:17:09.068 [2024-11-26 04:13:10.792343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:09.068 [2024-11-26 04:13:10.792449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88984 ] 00:17:09.327 [2024-11-26 04:13:10.930218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.327 [2024-11-26 04:13:11.004753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.264 04:13:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.264 04:13:11 -- common/autotest_common.sh@862 -- # return 0 00:17:10.264 04:13:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:10.264 [2024-11-26 04:13:12.022655] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.523 [2024-11-26 04:13:12.033711] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:10.523 [2024-11-26 04:13:12.033789] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:10.523 [2024-11-26 04:13:12.033852] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:10.523 [2024-11-26 04:13:12.034358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x745cc0 (107): Transport endpoint is not connected 00:17:10.523 [2024-11-26 04:13:12.035333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x745cc0 (9): Bad file descriptor 00:17:10.523 [2024-11-26 04:13:12.036330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:10.523 [2024-11-26 04:13:12.036349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:10.523 [2024-11-26 04:13:12.036358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:10.523 2024/11/26 04:13:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:10.523 request: 00:17:10.523 { 00:17:10.523 "method": "bdev_nvme_attach_controller", 00:17:10.523 "params": { 00:17:10.523 "name": "TLSTEST", 00:17:10.523 "trtype": "tcp", 00:17:10.523 "traddr": "10.0.0.2", 00:17:10.523 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:10.523 "adrfam": "ipv4", 00:17:10.523 "trsvcid": "4420", 00:17:10.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.523 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:10.523 } 00:17:10.523 } 00:17:10.523 Got JSON-RPC error response 00:17:10.523 GoRPCClient: error on JSON-RPC call 00:17:10.523 04:13:12 -- target/tls.sh@36 -- # killprocess 88984 00:17:10.523 04:13:12 -- common/autotest_common.sh@936 -- # '[' -z 88984 ']' 00:17:10.523 04:13:12 -- common/autotest_common.sh@940 -- # kill -0 88984 00:17:10.523 04:13:12 -- common/autotest_common.sh@941 -- # uname 00:17:10.523 04:13:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.523 04:13:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88984 00:17:10.523 killing process with pid 88984 00:17:10.523 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.523 00:17:10.523 Latency(us) 00:17:10.523 [2024-11-26T04:13:12.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.523 [2024-11-26T04:13:12.291Z] =================================================================================================================== 00:17:10.523 [2024-11-26T04:13:12.291Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.523 04:13:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:10.523 04:13:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:10.523 04:13:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88984' 00:17:10.523 04:13:12 -- common/autotest_common.sh@955 -- # kill 88984 00:17:10.523 04:13:12 -- common/autotest_common.sh@960 -- # wait 88984 00:17:10.782 04:13:12 -- target/tls.sh@37 -- # return 1 00:17:10.782 04:13:12 -- common/autotest_common.sh@653 -- # es=1 00:17:10.782 04:13:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.782 04:13:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.782 04:13:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.782 04:13:12 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:10.782 04:13:12 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:10.782 04:13:12 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:10.782 04:13:12 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:10.782 04:13:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.782 04:13:12 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:10.782 04:13:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.782 04:13:12 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:10.782 04:13:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.782 04:13:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:10.782 04:13:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.782 04:13:12 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:10.782 04:13:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.782 04:13:12 -- target/tls.sh@28 -- # bdevperf_pid=89030 00:17:10.782 04:13:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.782 04:13:12 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.782 04:13:12 -- target/tls.sh@31 -- # waitforlisten 89030 /var/tmp/bdevperf.sock 00:17:10.782 04:13:12 -- common/autotest_common.sh@829 -- # '[' -z 89030 ']' 00:17:10.782 04:13:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.782 04:13:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.782 04:13:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.782 04:13:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.782 04:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:10.782 [2024-11-26 04:13:12.420293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:10.782 [2024-11-26 04:13:12.420585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89030 ] 00:17:11.041 [2024-11-26 04:13:12.558896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.041 [2024-11-26 04:13:12.617103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.977 04:13:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.977 04:13:13 -- common/autotest_common.sh@862 -- # return 0 00:17:11.977 04:13:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.977 [2024-11-26 04:13:13.596463] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.977 [2024-11-26 04:13:13.604012] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:11.977 [2024-11-26 04:13:13.604209] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:11.977 [2024-11-26 04:13:13.604385] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.977 [2024-11-26 04:13:13.605152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e42cc0 (107): Transport endpoint is not connected 00:17:11.977 [2024-11-26 04:13:13.606143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e42cc0 (9): Bad file descriptor 00:17:11.977 [2024-11-26 04:13:13.607138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:11.977 [2024-11-26 04:13:13.607162] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.977 [2024-11-26 04:13:13.607171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
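The "Could not find PSK for identity" errors above make the lookup model visible: during the handshake the host offers a PSK identity of the form NVMe0R01 <hostnqn> <subnqn>, and the target's ssl socket layer only finds a key if exactly that host/subsystem pair was registered with nvmf_subsystem_add_host --psk. host1 was registered against cnode1 only, so the previous case (host2 against cnode1) and this one (host1 against cnode2) both fail the lookup before any NVMe-level setup starts. Purely as a hypothetical illustration, making this identity resolvable would mean registering the pair on the target side, assuming a cnode2 subsystem with a TLS listener had been created the same way as cnode1:

    # hypothetical: not part of this test run
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

The test instead expects the failure: the NOT wrapper turns the Code=-32602 response below into a pass.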
00:17:11.977 2024/11/26 04:13:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:11.977 request: 00:17:11.977 { 00:17:11.977 "method": "bdev_nvme_attach_controller", 00:17:11.977 "params": { 00:17:11.977 "name": "TLSTEST", 00:17:11.977 "trtype": "tcp", 00:17:11.977 "traddr": "10.0.0.2", 00:17:11.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.977 "adrfam": "ipv4", 00:17:11.977 "trsvcid": "4420", 00:17:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:11.977 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:11.977 } 00:17:11.977 } 00:17:11.977 Got JSON-RPC error response 00:17:11.977 GoRPCClient: error on JSON-RPC call 00:17:11.977 04:13:13 -- target/tls.sh@36 -- # killprocess 89030 00:17:11.977 04:13:13 -- common/autotest_common.sh@936 -- # '[' -z 89030 ']' 00:17:11.977 04:13:13 -- common/autotest_common.sh@940 -- # kill -0 89030 00:17:11.977 04:13:13 -- common/autotest_common.sh@941 -- # uname 00:17:11.977 04:13:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.977 04:13:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89030 00:17:11.977 04:13:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:11.977 04:13:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:11.977 killing process with pid 89030 00:17:11.977 04:13:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89030' 00:17:11.977 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.978 00:17:11.978 Latency(us) 00:17:11.978 [2024-11-26T04:13:13.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.978 [2024-11-26T04:13:13.746Z] =================================================================================================================== 00:17:11.978 [2024-11-26T04:13:13.746Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.978 04:13:13 -- common/autotest_common.sh@955 -- # kill 89030 00:17:11.978 04:13:13 -- common/autotest_common.sh@960 -- # wait 89030 00:17:12.236 04:13:13 -- target/tls.sh@37 -- # return 1 00:17:12.236 04:13:13 -- common/autotest_common.sh@653 -- # es=1 00:17:12.236 04:13:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.236 04:13:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.236 04:13:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.237 04:13:13 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:12.237 04:13:13 -- common/autotest_common.sh@650 -- # local es=0 00:17:12.237 04:13:13 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:12.237 04:13:13 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:12.237 04:13:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.237 04:13:13 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:12.237 04:13:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.237 04:13:13 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:12.237 04:13:13 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:12.237 04:13:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:12.237 04:13:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:12.237 04:13:13 -- target/tls.sh@23 -- # psk= 00:17:12.237 04:13:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.237 04:13:13 -- target/tls.sh@28 -- # bdevperf_pid=89070 00:17:12.237 04:13:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.237 04:13:13 -- target/tls.sh@31 -- # waitforlisten 89070 /var/tmp/bdevperf.sock 00:17:12.237 04:13:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:12.237 04:13:13 -- common/autotest_common.sh@829 -- # '[' -z 89070 ']' 00:17:12.237 04:13:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.237 04:13:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.237 04:13:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.237 04:13:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.237 04:13:13 -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 [2024-11-26 04:13:13.961157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:12.237 [2024-11-26 04:13:13.961263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89070 ] 00:17:12.498 [2024-11-26 04:13:14.101770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.498 [2024-11-26 04:13:14.187630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.435 04:13:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.435 04:13:14 -- common/autotest_common.sh@862 -- # return 0 00:17:13.435 04:13:14 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:13.435 [2024-11-26 04:13:15.174219] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:13.435 [2024-11-26 04:13:15.175830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23af8c0 (9): Bad file descriptor 00:17:13.435 [2024-11-26 04:13:15.176823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:13.435 [2024-11-26 04:13:15.176860] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:13.435 [2024-11-26 04:13:15.176869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:13.435 2024/11/26 04:13:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:13.435 request: 00:17:13.435 { 00:17:13.435 "method": "bdev_nvme_attach_controller", 00:17:13.435 "params": { 00:17:13.435 "name": "TLSTEST", 00:17:13.435 "trtype": "tcp", 00:17:13.435 "traddr": "10.0.0.2", 00:17:13.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.435 "adrfam": "ipv4", 00:17:13.435 "trsvcid": "4420", 00:17:13.435 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:13.435 } 00:17:13.435 } 00:17:13.435 Got JSON-RPC error response 00:17:13.435 GoRPCClient: error on JSON-RPC call 00:17:13.694 04:13:15 -- target/tls.sh@36 -- # killprocess 89070 00:17:13.694 04:13:15 -- common/autotest_common.sh@936 -- # '[' -z 89070 ']' 00:17:13.694 04:13:15 -- common/autotest_common.sh@940 -- # kill -0 89070 00:17:13.694 04:13:15 -- common/autotest_common.sh@941 -- # uname 00:17:13.694 04:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.694 04:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89070 00:17:13.694 04:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.694 04:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.694 killing process with pid 89070 00:17:13.694 04:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89070' 00:17:13.694 04:13:15 -- common/autotest_common.sh@955 -- # kill 89070 00:17:13.694 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.694 00:17:13.694 Latency(us) 00:17:13.694 [2024-11-26T04:13:15.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.694 [2024-11-26T04:13:15.462Z] =================================================================================================================== 00:17:13.694 [2024-11-26T04:13:15.462Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.694 04:13:15 -- common/autotest_common.sh@960 -- # wait 89070 00:17:13.953 04:13:15 -- target/tls.sh@37 -- # return 1 00:17:13.953 04:13:15 -- common/autotest_common.sh@653 -- # es=1 00:17:13.953 04:13:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.953 04:13:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.953 04:13:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.953 04:13:15 -- target/tls.sh@167 -- # killprocess 88421 00:17:13.953 04:13:15 -- common/autotest_common.sh@936 -- # '[' -z 88421 ']' 00:17:13.953 04:13:15 -- common/autotest_common.sh@940 -- # kill -0 88421 00:17:13.953 04:13:15 -- common/autotest_common.sh@941 -- # uname 00:17:13.953 04:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.953 04:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88421 00:17:13.953 04:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:13.953 04:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:13.953 killing process with pid 88421 00:17:13.953 04:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88421' 00:17:13.953 04:13:15 -- common/autotest_common.sh@955 -- # kill 88421 00:17:13.953 04:13:15 -- common/autotest_common.sh@960 -- # wait 88421 00:17:13.953 04:13:15 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:13.953 04:13:15 -- target/tls.sh@49 -- # local key hash crc 00:17:13.953 04:13:15 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:13.953 04:13:15 -- target/tls.sh@51 -- # hash=02 00:17:13.953 04:13:15 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:13.953 04:13:15 -- target/tls.sh@52 -- # gzip -1 -c 00:17:13.953 04:13:15 -- target/tls.sh@52 -- # tail -c8 00:17:13.953 04:13:15 -- target/tls.sh@52 -- # head -c 4 00:17:13.953 04:13:15 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:13.953 04:13:15 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:13.953 04:13:15 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:14.212 04:13:15 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.212 04:13:15 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.212 04:13:15 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.212 04:13:15 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.212 04:13:15 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.212 04:13:15 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:14.212 04:13:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.212 04:13:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.212 04:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:14.212 04:13:15 -- nvmf/common.sh@469 -- # nvmfpid=89136 00:17:14.212 04:13:15 -- nvmf/common.sh@470 -- # waitforlisten 89136 00:17:14.212 04:13:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.212 04:13:15 -- common/autotest_common.sh@829 -- # '[' -z 89136 ']' 00:17:14.212 04:13:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.212 04:13:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.212 04:13:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.212 04:13:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.212 04:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:14.212 [2024-11-26 04:13:15.787792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:14.212 [2024-11-26 04:13:15.787896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.212 [2024-11-26 04:13:15.930254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.470 [2024-11-26 04:13:15.985275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.470 [2024-11-26 04:13:15.985438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:14.470 [2024-11-26 04:13:15.985451] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.470 [2024-11-26 04:13:15.985459] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.470 [2024-11-26 04:13:15.985489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.037 04:13:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.037 04:13:16 -- common/autotest_common.sh@862 -- # return 0 00:17:15.037 04:13:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.037 04:13:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.037 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:17:15.295 04:13:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.295 04:13:16 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.295 04:13:16 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.295 04:13:16 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.553 [2024-11-26 04:13:17.081426] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.553 04:13:17 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:15.553 04:13:17 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.120 [2024-11-26 04:13:17.581482] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.120 [2024-11-26 04:13:17.581685] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.120 04:13:17 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.120 malloc0 00:17:16.379 04:13:17 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.637 04:13:18 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.897 04:13:18 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.897 04:13:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.897 04:13:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.897 04:13:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.897 04:13:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:16.897 04:13:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.897 04:13:18 -- target/tls.sh@28 -- # bdevperf_pid=89239 00:17:16.897 04:13:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.897 04:13:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.897 04:13:18 -- target/tls.sh@31 -- # waitforlisten 89239 /var/tmp/bdevperf.sock 00:17:16.897 04:13:18 -- 
common/autotest_common.sh@829 -- # '[' -z 89239 ']' 00:17:16.897 04:13:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.897 04:13:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.897 04:13:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.897 04:13:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.897 04:13:18 -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 [2024-11-26 04:13:18.482831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:16.897 [2024-11-26 04:13:18.482899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89239 ] 00:17:16.897 [2024-11-26 04:13:18.615240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.156 [2024-11-26 04:13:18.690331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.091 04:13:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.091 04:13:19 -- common/autotest_common.sh@862 -- # return 0 00:17:18.091 04:13:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.091 [2024-11-26 04:13:19.715833] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.091 TLSTESTn1 00:17:18.091 04:13:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:18.349 Running I/O for 10 seconds... 
00:17:28.322 00:17:28.322 Latency(us) 00:17:28.322 [2024-11-26T04:13:30.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.322 [2024-11-26T04:13:30.090Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.322 Verification LBA range: start 0x0 length 0x2000 00:17:28.322 TLSTESTn1 : 10.01 6563.09 25.64 0.00 0.00 19473.57 4349.21 24188.74 00:17:28.322 [2024-11-26T04:13:30.090Z] =================================================================================================================== 00:17:28.322 [2024-11-26T04:13:30.090Z] Total : 6563.09 25.64 0.00 0.00 19473.57 4349.21 24188.74 00:17:28.322 0 00:17:28.322 04:13:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.322 04:13:29 -- target/tls.sh@45 -- # killprocess 89239 00:17:28.322 04:13:29 -- common/autotest_common.sh@936 -- # '[' -z 89239 ']' 00:17:28.322 04:13:29 -- common/autotest_common.sh@940 -- # kill -0 89239 00:17:28.322 04:13:29 -- common/autotest_common.sh@941 -- # uname 00:17:28.322 04:13:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.322 04:13:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89239 00:17:28.322 04:13:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:28.322 04:13:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:28.322 killing process with pid 89239 00:17:28.322 04:13:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89239' 00:17:28.322 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.322 00:17:28.322 Latency(us) 00:17:28.322 [2024-11-26T04:13:30.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.322 [2024-11-26T04:13:30.090Z] =================================================================================================================== 00:17:28.322 [2024-11-26T04:13:30.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.322 04:13:29 -- common/autotest_common.sh@955 -- # kill 89239 00:17:28.322 04:13:29 -- common/autotest_common.sh@960 -- # wait 89239 00:17:28.580 04:13:30 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.580 04:13:30 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.580 04:13:30 -- common/autotest_common.sh@650 -- # local es=0 00:17:28.580 04:13:30 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.580 04:13:30 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:28.580 04:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:28.580 04:13:30 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:28.580 04:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:28.580 04:13:30 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.580 04:13:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:28.580 04:13:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:28.580 04:13:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:28.581 04:13:30 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:28.581 04:13:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.581 04:13:30 -- target/tls.sh@28 -- # bdevperf_pid=89390 00:17:28.581 04:13:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:28.581 04:13:30 -- target/tls.sh@31 -- # waitforlisten 89390 /var/tmp/bdevperf.sock 00:17:28.581 04:13:30 -- common/autotest_common.sh@829 -- # '[' -z 89390 ']' 00:17:28.581 04:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.581 04:13:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:28.581 04:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.581 04:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.581 04:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.581 04:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:28.581 [2024-11-26 04:13:30.271206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:28.581 [2024-11-26 04:13:30.271311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89390 ] 00:17:28.840 [2024-11-26 04:13:30.410496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.840 [2024-11-26 04:13:30.480169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.777 04:13:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.777 04:13:31 -- common/autotest_common.sh@862 -- # return 0 00:17:29.777 04:13:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:29.777 [2024-11-26 04:13:31.430329] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.777 [2024-11-26 04:13:31.430399] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:29.777 2024/11/26 04:13:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:29.777 request: 00:17:29.777 { 00:17:29.777 "method": "bdev_nvme_attach_controller", 00:17:29.777 "params": { 00:17:29.777 "name": "TLSTEST", 00:17:29.777 "trtype": "tcp", 00:17:29.777 "traddr": "10.0.0.2", 00:17:29.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.777 "adrfam": "ipv4", 00:17:29.777 "trsvcid": "4420", 00:17:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.777 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:29.777 } 00:17:29.777 } 00:17:29.777 Got 
JSON-RPC error response 00:17:29.777 GoRPCClient: error on JSON-RPC call 00:17:29.777 04:13:31 -- target/tls.sh@36 -- # killprocess 89390 00:17:29.777 04:13:31 -- common/autotest_common.sh@936 -- # '[' -z 89390 ']' 00:17:29.777 04:13:31 -- common/autotest_common.sh@940 -- # kill -0 89390 00:17:29.777 04:13:31 -- common/autotest_common.sh@941 -- # uname 00:17:29.777 04:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.777 04:13:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89390 00:17:29.777 04:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:29.777 04:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:29.777 killing process with pid 89390 00:17:29.777 04:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89390' 00:17:29.777 04:13:31 -- common/autotest_common.sh@955 -- # kill 89390 00:17:29.777 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.777 00:17:29.777 Latency(us) 00:17:29.777 [2024-11-26T04:13:31.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.777 [2024-11-26T04:13:31.545Z] =================================================================================================================== 00:17:29.777 [2024-11-26T04:13:31.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.777 04:13:31 -- common/autotest_common.sh@960 -- # wait 89390 00:17:30.037 04:13:31 -- target/tls.sh@37 -- # return 1 00:17:30.037 04:13:31 -- common/autotest_common.sh@653 -- # es=1 00:17:30.037 04:13:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.037 04:13:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.037 04:13:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.037 04:13:31 -- target/tls.sh@183 -- # killprocess 89136 00:17:30.037 04:13:31 -- common/autotest_common.sh@936 -- # '[' -z 89136 ']' 00:17:30.037 04:13:31 -- common/autotest_common.sh@940 -- # kill -0 89136 00:17:30.037 04:13:31 -- common/autotest_common.sh@941 -- # uname 00:17:30.037 04:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.037 04:13:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89136 00:17:30.037 04:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:30.037 04:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:30.037 killing process with pid 89136 00:17:30.037 04:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89136' 00:17:30.037 04:13:31 -- common/autotest_common.sh@955 -- # kill 89136 00:17:30.037 04:13:31 -- common/autotest_common.sh@960 -- # wait 89136 00:17:30.296 04:13:31 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:30.296 04:13:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:30.296 04:13:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.296 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:30.296 04:13:31 -- nvmf/common.sh@469 -- # nvmfpid=89442 00:17:30.296 04:13:31 -- nvmf/common.sh@470 -- # waitforlisten 89442 00:17:30.296 04:13:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.296 04:13:31 -- common/autotest_common.sh@829 -- # '[' -z 89442 ']' 00:17:30.296 04:13:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.296 04:13:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.296 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.296 04:13:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.296 04:13:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.296 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:30.296 [2024-11-26 04:13:32.004996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:30.296 [2024-11-26 04:13:32.005107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.556 [2024-11-26 04:13:32.137048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.556 [2024-11-26 04:13:32.191205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.556 [2024-11-26 04:13:32.191342] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.556 [2024-11-26 04:13:32.191353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.556 [2024-11-26 04:13:32.191361] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.556 [2024-11-26 04:13:32.191391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.493 04:13:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.493 04:13:32 -- common/autotest_common.sh@862 -- # return 0 00:17:31.493 04:13:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:31.493 04:13:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.493 04:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:31.493 04:13:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.493 04:13:32 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.493 04:13:32 -- common/autotest_common.sh@650 -- # local es=0 00:17:31.493 04:13:32 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.493 04:13:32 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:31.493 04:13:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.493 04:13:32 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:31.493 04:13:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.493 04:13:32 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.493 04:13:32 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.493 04:13:32 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:31.493 [2024-11-26 04:13:33.233534] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.493 04:13:33 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:31.751 04:13:33 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.009 
[2024-11-26 04:13:33.697627] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.009 [2024-11-26 04:13:33.698066] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.009 04:13:33 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.267 malloc0 00:17:32.267 04:13:33 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:32.524 04:13:34 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:32.782 [2024-11-26 04:13:34.396601] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:32.782 [2024-11-26 04:13:34.397131] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:32.782 [2024-11-26 04:13:34.397189] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:32.782 2024/11/26 04:13:34 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:32.782 request: 00:17:32.782 { 00:17:32.782 "method": "nvmf_subsystem_add_host", 00:17:32.782 "params": { 00:17:32.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.782 "host": "nqn.2016-06.io.spdk:host1", 00:17:32.782 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:32.782 } 00:17:32.782 } 00:17:32.782 Got JSON-RPC error response 00:17:32.782 GoRPCClient: error on JSON-RPC call 00:17:32.782 04:13:34 -- common/autotest_common.sh@653 -- # es=1 00:17:32.782 04:13:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.782 04:13:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.782 04:13:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.782 04:13:34 -- target/tls.sh@189 -- # killprocess 89442 00:17:32.782 04:13:34 -- common/autotest_common.sh@936 -- # '[' -z 89442 ']' 00:17:32.782 04:13:34 -- common/autotest_common.sh@940 -- # kill -0 89442 00:17:32.782 04:13:34 -- common/autotest_common.sh@941 -- # uname 00:17:32.782 04:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.782 04:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89442 00:17:32.782 04:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:32.782 04:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:32.782 killing process with pid 89442 00:17:32.782 04:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89442' 00:17:32.782 04:13:34 -- common/autotest_common.sh@955 -- # kill 89442 00:17:32.782 04:13:34 -- common/autotest_common.sh@960 -- # wait 89442 00:17:33.040 04:13:34 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.040 04:13:34 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:33.040 04:13:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.040 04:13:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.040 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:17:33.040 04:13:34 -- nvmf/common.sh@469 -- # nvmfpid=89553 
00:17:33.040 04:13:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.040 04:13:34 -- nvmf/common.sh@470 -- # waitforlisten 89553 00:17:33.040 04:13:34 -- common/autotest_common.sh@829 -- # '[' -z 89553 ']' 00:17:33.040 04:13:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.040 04:13:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.040 04:13:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.040 04:13:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.040 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:17:33.040 [2024-11-26 04:13:34.713431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:33.040 [2024-11-26 04:13:34.713533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.297 [2024-11-26 04:13:34.853618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.297 [2024-11-26 04:13:34.916593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.297 [2024-11-26 04:13:34.916751] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.297 [2024-11-26 04:13:34.916764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.297 [2024-11-26 04:13:34.916773] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.297 [2024-11-26 04:13:34.916796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.229 04:13:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.229 04:13:35 -- common/autotest_common.sh@862 -- # return 0 00:17:34.229 04:13:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.229 04:13:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.229 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:17:34.229 04:13:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.229 04:13:35 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.229 04:13:35 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.229 04:13:35 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.486 [2024-11-26 04:13:36.006831] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.486 04:13:36 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.486 04:13:36 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.744 [2024-11-26 04:13:36.402881] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.744 [2024-11-26 04:13:36.403212] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.744 04:13:36 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.001 malloc0 00:17:35.001 04:13:36 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.260 04:13:36 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:35.518 04:13:37 -- target/tls.sh@197 -- # bdevperf_pid=89650 00:17:35.518 04:13:37 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.518 04:13:37 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:35.518 04:13:37 -- target/tls.sh@200 -- # waitforlisten 89650 /var/tmp/bdevperf.sock 00:17:35.518 04:13:37 -- common/autotest_common.sh@829 -- # '[' -z 89650 ']' 00:17:35.518 04:13:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.518 04:13:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.518 04:13:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.518 04:13:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.518 04:13:37 -- common/autotest_common.sh@10 -- # set +x 00:17:35.518 [2024-11-26 04:13:37.089441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:35.519 [2024-11-26 04:13:37.089542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89650 ] 00:17:35.519 [2024-11-26 04:13:37.233369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.780 [2024-11-26 04:13:37.312753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.352 04:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.352 04:13:37 -- common/autotest_common.sh@862 -- # return 0 00:17:36.352 04:13:37 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.352 [2024-11-26 04:13:38.112127] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.653 TLSTESTn1 00:17:36.653 04:13:38 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:36.948 04:13:38 -- target/tls.sh@205 -- # tgtconf='{ 00:17:36.948 "subsystems": [ 00:17:36.948 { 00:17:36.948 "subsystem": "iobuf", 00:17:36.948 "config": [ 00:17:36.948 { 00:17:36.948 "method": "iobuf_set_options", 00:17:36.948 "params": { 00:17:36.948 "large_bufsize": 135168, 00:17:36.948 "large_pool_count": 1024, 00:17:36.948 "small_bufsize": 8192, 00:17:36.948 "small_pool_count": 8192 00:17:36.948 } 00:17:36.948 } 00:17:36.948 ] 00:17:36.948 }, 00:17:36.948 { 00:17:36.948 "subsystem": "sock", 00:17:36.948 "config": [ 00:17:36.948 { 00:17:36.948 "method": "sock_impl_set_options", 00:17:36.948 "params": { 00:17:36.948 "enable_ktls": false, 00:17:36.948 "enable_placement_id": 0, 00:17:36.948 "enable_quickack": false, 00:17:36.948 "enable_recv_pipe": true, 00:17:36.948 "enable_zerocopy_send_client": false, 00:17:36.948 "enable_zerocopy_send_server": true, 00:17:36.948 "impl_name": "posix", 00:17:36.948 "recv_buf_size": 2097152, 00:17:36.948 "send_buf_size": 2097152, 00:17:36.948 "tls_version": 0, 00:17:36.948 "zerocopy_threshold": 0 00:17:36.948 } 00:17:36.948 }, 00:17:36.948 { 00:17:36.948 "method": "sock_impl_set_options", 00:17:36.949 "params": { 00:17:36.949 "enable_ktls": false, 00:17:36.949 "enable_placement_id": 0, 00:17:36.949 "enable_quickack": false, 00:17:36.949 "enable_recv_pipe": true, 00:17:36.949 "enable_zerocopy_send_client": false, 00:17:36.949 "enable_zerocopy_send_server": true, 00:17:36.949 "impl_name": "ssl", 00:17:36.949 "recv_buf_size": 4096, 00:17:36.949 "send_buf_size": 4096, 00:17:36.949 "tls_version": 0, 00:17:36.949 "zerocopy_threshold": 0 00:17:36.949 } 00:17:36.949 } 00:17:36.949 ] 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "subsystem": "vmd", 00:17:36.949 "config": [] 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "subsystem": "accel", 00:17:36.949 "config": [ 00:17:36.949 { 00:17:36.949 "method": "accel_set_options", 00:17:36.949 "params": { 00:17:36.949 "buf_count": 2048, 00:17:36.949 "large_cache_size": 16, 00:17:36.949 "sequence_count": 2048, 00:17:36.949 "small_cache_size": 128, 00:17:36.949 "task_count": 2048 00:17:36.949 } 00:17:36.949 } 00:17:36.949 ] 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "subsystem": "bdev", 00:17:36.949 "config": [ 00:17:36.949 { 00:17:36.949 "method": "bdev_set_options", 00:17:36.949 "params": { 00:17:36.949 
"bdev_auto_examine": true, 00:17:36.949 "bdev_io_cache_size": 256, 00:17:36.949 "bdev_io_pool_size": 65535, 00:17:36.949 "iobuf_large_cache_size": 16, 00:17:36.949 "iobuf_small_cache_size": 128 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "bdev_raid_set_options", 00:17:36.949 "params": { 00:17:36.949 "process_window_size_kb": 1024 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "bdev_iscsi_set_options", 00:17:36.949 "params": { 00:17:36.949 "timeout_sec": 30 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "bdev_nvme_set_options", 00:17:36.949 "params": { 00:17:36.949 "action_on_timeout": "none", 00:17:36.949 "allow_accel_sequence": false, 00:17:36.949 "arbitration_burst": 0, 00:17:36.949 "bdev_retry_count": 3, 00:17:36.949 "ctrlr_loss_timeout_sec": 0, 00:17:36.949 "delay_cmd_submit": true, 00:17:36.949 "fast_io_fail_timeout_sec": 0, 00:17:36.949 "generate_uuids": false, 00:17:36.949 "high_priority_weight": 0, 00:17:36.949 "io_path_stat": false, 00:17:36.949 "io_queue_requests": 0, 00:17:36.949 "keep_alive_timeout_ms": 10000, 00:17:36.949 "low_priority_weight": 0, 00:17:36.949 "medium_priority_weight": 0, 00:17:36.949 "nvme_adminq_poll_period_us": 10000, 00:17:36.949 "nvme_ioq_poll_period_us": 0, 00:17:36.949 "reconnect_delay_sec": 0, 00:17:36.949 "timeout_admin_us": 0, 00:17:36.949 "timeout_us": 0, 00:17:36.949 "transport_ack_timeout": 0, 00:17:36.949 "transport_retry_count": 4, 00:17:36.949 "transport_tos": 0 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "bdev_nvme_set_hotplug", 00:17:36.949 "params": { 00:17:36.949 "enable": false, 00:17:36.949 "period_us": 100000 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "bdev_malloc_create", 00:17:36.949 "params": { 00:17:36.949 "block_size": 4096, 00:17:36.949 "name": "malloc0", 00:17:36.949 "num_blocks": 8192, 00:17:36.949 "optimal_io_boundary": 0, 00:17:36.949 "physical_block_size": 4096, 00:17:36.949 "uuid": "60f9016f-8109-4491-bc23-267babc52379" 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "bdev_wait_for_examine" 00:17:36.949 } 00:17:36.949 ] 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "subsystem": "nbd", 00:17:36.949 "config": [] 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "subsystem": "scheduler", 00:17:36.949 "config": [ 00:17:36.949 { 00:17:36.949 "method": "framework_set_scheduler", 00:17:36.949 "params": { 00:17:36.949 "name": "static" 00:17:36.949 } 00:17:36.949 } 00:17:36.949 ] 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "subsystem": "nvmf", 00:17:36.949 "config": [ 00:17:36.949 { 00:17:36.949 "method": "nvmf_set_config", 00:17:36.949 "params": { 00:17:36.949 "admin_cmd_passthru": { 00:17:36.949 "identify_ctrlr": false 00:17:36.949 }, 00:17:36.949 "discovery_filter": "match_any" 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_set_max_subsystems", 00:17:36.949 "params": { 00:17:36.949 "max_subsystems": 1024 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_set_crdt", 00:17:36.949 "params": { 00:17:36.949 "crdt1": 0, 00:17:36.949 "crdt2": 0, 00:17:36.949 "crdt3": 0 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_create_transport", 00:17:36.949 "params": { 00:17:36.949 "abort_timeout_sec": 1, 00:17:36.949 "buf_cache_size": 4294967295, 00:17:36.949 "c2h_success": false, 00:17:36.949 "dif_insert_or_strip": false, 00:17:36.949 "in_capsule_data_size": 4096, 00:17:36.949 "io_unit_size": 131072, 00:17:36.949 "max_aq_depth": 128, 
00:17:36.949 "max_io_qpairs_per_ctrlr": 127, 00:17:36.949 "max_io_size": 131072, 00:17:36.949 "max_queue_depth": 128, 00:17:36.949 "num_shared_buffers": 511, 00:17:36.949 "sock_priority": 0, 00:17:36.949 "trtype": "TCP", 00:17:36.949 "zcopy": false 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_create_subsystem", 00:17:36.949 "params": { 00:17:36.949 "allow_any_host": false, 00:17:36.949 "ana_reporting": false, 00:17:36.949 "max_cntlid": 65519, 00:17:36.949 "max_namespaces": 10, 00:17:36.949 "min_cntlid": 1, 00:17:36.949 "model_number": "SPDK bdev Controller", 00:17:36.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.949 "serial_number": "SPDK00000000000001" 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_subsystem_add_host", 00:17:36.949 "params": { 00:17:36.949 "host": "nqn.2016-06.io.spdk:host1", 00:17:36.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.949 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_subsystem_add_ns", 00:17:36.949 "params": { 00:17:36.949 "namespace": { 00:17:36.949 "bdev_name": "malloc0", 00:17:36.949 "nguid": "60F9016F81094491BC23267BABC52379", 00:17:36.949 "nsid": 1, 00:17:36.949 "uuid": "60f9016f-8109-4491-bc23-267babc52379" 00:17:36.949 }, 00:17:36.949 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:36.949 } 00:17:36.949 }, 00:17:36.949 { 00:17:36.949 "method": "nvmf_subsystem_add_listener", 00:17:36.949 "params": { 00:17:36.949 "listen_address": { 00:17:36.949 "adrfam": "IPv4", 00:17:36.949 "traddr": "10.0.0.2", 00:17:36.949 "trsvcid": "4420", 00:17:36.949 "trtype": "TCP" 00:17:36.949 }, 00:17:36.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.950 "secure_channel": true 00:17:36.950 } 00:17:36.950 } 00:17:36.950 ] 00:17:36.950 } 00:17:36.950 ] 00:17:36.950 }' 00:17:36.950 04:13:38 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:37.219 04:13:38 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:37.219 "subsystems": [ 00:17:37.219 { 00:17:37.219 "subsystem": "iobuf", 00:17:37.219 "config": [ 00:17:37.219 { 00:17:37.219 "method": "iobuf_set_options", 00:17:37.219 "params": { 00:17:37.219 "large_bufsize": 135168, 00:17:37.219 "large_pool_count": 1024, 00:17:37.219 "small_bufsize": 8192, 00:17:37.219 "small_pool_count": 8192 00:17:37.219 } 00:17:37.219 } 00:17:37.219 ] 00:17:37.219 }, 00:17:37.219 { 00:17:37.219 "subsystem": "sock", 00:17:37.219 "config": [ 00:17:37.219 { 00:17:37.219 "method": "sock_impl_set_options", 00:17:37.219 "params": { 00:17:37.219 "enable_ktls": false, 00:17:37.219 "enable_placement_id": 0, 00:17:37.219 "enable_quickack": false, 00:17:37.219 "enable_recv_pipe": true, 00:17:37.219 "enable_zerocopy_send_client": false, 00:17:37.219 "enable_zerocopy_send_server": true, 00:17:37.219 "impl_name": "posix", 00:17:37.219 "recv_buf_size": 2097152, 00:17:37.219 "send_buf_size": 2097152, 00:17:37.219 "tls_version": 0, 00:17:37.219 "zerocopy_threshold": 0 00:17:37.219 } 00:17:37.219 }, 00:17:37.219 { 00:17:37.219 "method": "sock_impl_set_options", 00:17:37.219 "params": { 00:17:37.219 "enable_ktls": false, 00:17:37.219 "enable_placement_id": 0, 00:17:37.219 "enable_quickack": false, 00:17:37.219 "enable_recv_pipe": true, 00:17:37.219 "enable_zerocopy_send_client": false, 00:17:37.219 "enable_zerocopy_send_server": true, 00:17:37.219 "impl_name": "ssl", 00:17:37.219 "recv_buf_size": 4096, 00:17:37.219 "send_buf_size": 4096, 00:17:37.219 
"tls_version": 0, 00:17:37.219 "zerocopy_threshold": 0 00:17:37.219 } 00:17:37.219 } 00:17:37.219 ] 00:17:37.219 }, 00:17:37.219 { 00:17:37.219 "subsystem": "vmd", 00:17:37.219 "config": [] 00:17:37.219 }, 00:17:37.219 { 00:17:37.219 "subsystem": "accel", 00:17:37.219 "config": [ 00:17:37.219 { 00:17:37.219 "method": "accel_set_options", 00:17:37.219 "params": { 00:17:37.219 "buf_count": 2048, 00:17:37.219 "large_cache_size": 16, 00:17:37.219 "sequence_count": 2048, 00:17:37.219 "small_cache_size": 128, 00:17:37.219 "task_count": 2048 00:17:37.219 } 00:17:37.220 } 00:17:37.220 ] 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "subsystem": "bdev", 00:17:37.220 "config": [ 00:17:37.220 { 00:17:37.220 "method": "bdev_set_options", 00:17:37.220 "params": { 00:17:37.220 "bdev_auto_examine": true, 00:17:37.220 "bdev_io_cache_size": 256, 00:17:37.220 "bdev_io_pool_size": 65535, 00:17:37.220 "iobuf_large_cache_size": 16, 00:17:37.220 "iobuf_small_cache_size": 128 00:17:37.220 } 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "method": "bdev_raid_set_options", 00:17:37.220 "params": { 00:17:37.220 "process_window_size_kb": 1024 00:17:37.220 } 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "method": "bdev_iscsi_set_options", 00:17:37.220 "params": { 00:17:37.220 "timeout_sec": 30 00:17:37.220 } 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "method": "bdev_nvme_set_options", 00:17:37.220 "params": { 00:17:37.220 "action_on_timeout": "none", 00:17:37.220 "allow_accel_sequence": false, 00:17:37.220 "arbitration_burst": 0, 00:17:37.220 "bdev_retry_count": 3, 00:17:37.220 "ctrlr_loss_timeout_sec": 0, 00:17:37.220 "delay_cmd_submit": true, 00:17:37.220 "fast_io_fail_timeout_sec": 0, 00:17:37.220 "generate_uuids": false, 00:17:37.220 "high_priority_weight": 0, 00:17:37.220 "io_path_stat": false, 00:17:37.220 "io_queue_requests": 512, 00:17:37.220 "keep_alive_timeout_ms": 10000, 00:17:37.220 "low_priority_weight": 0, 00:17:37.220 "medium_priority_weight": 0, 00:17:37.220 "nvme_adminq_poll_period_us": 10000, 00:17:37.220 "nvme_ioq_poll_period_us": 0, 00:17:37.220 "reconnect_delay_sec": 0, 00:17:37.220 "timeout_admin_us": 0, 00:17:37.220 "timeout_us": 0, 00:17:37.220 "transport_ack_timeout": 0, 00:17:37.220 "transport_retry_count": 4, 00:17:37.220 "transport_tos": 0 00:17:37.220 } 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "method": "bdev_nvme_attach_controller", 00:17:37.220 "params": { 00:17:37.220 "adrfam": "IPv4", 00:17:37.220 "ctrlr_loss_timeout_sec": 0, 00:17:37.220 "ddgst": false, 00:17:37.220 "fast_io_fail_timeout_sec": 0, 00:17:37.220 "hdgst": false, 00:17:37.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.220 "name": "TLSTEST", 00:17:37.220 "prchk_guard": false, 00:17:37.220 "prchk_reftag": false, 00:17:37.220 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:37.220 "reconnect_delay_sec": 0, 00:17:37.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.220 "traddr": "10.0.0.2", 00:17:37.220 "trsvcid": "4420", 00:17:37.220 "trtype": "TCP" 00:17:37.220 } 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "method": "bdev_nvme_set_hotplug", 00:17:37.220 "params": { 00:17:37.220 "enable": false, 00:17:37.220 "period_us": 100000 00:17:37.220 } 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "method": "bdev_wait_for_examine" 00:17:37.220 } 00:17:37.220 ] 00:17:37.220 }, 00:17:37.220 { 00:17:37.220 "subsystem": "nbd", 00:17:37.220 "config": [] 00:17:37.220 } 00:17:37.220 ] 00:17:37.220 }' 00:17:37.220 04:13:38 -- target/tls.sh@208 -- # killprocess 89650 00:17:37.220 04:13:38 -- 
common/autotest_common.sh@936 -- # '[' -z 89650 ']' 00:17:37.220 04:13:38 -- common/autotest_common.sh@940 -- # kill -0 89650 00:17:37.220 04:13:38 -- common/autotest_common.sh@941 -- # uname 00:17:37.220 04:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.220 04:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89650 00:17:37.220 04:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:37.220 04:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:37.220 killing process with pid 89650 00:17:37.220 04:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89650' 00:17:37.220 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.220 00:17:37.220 Latency(us) 00:17:37.220 [2024-11-26T04:13:38.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.220 [2024-11-26T04:13:38.988Z] =================================================================================================================== 00:17:37.220 [2024-11-26T04:13:38.988Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:37.220 04:13:38 -- common/autotest_common.sh@955 -- # kill 89650 00:17:37.220 04:13:38 -- common/autotest_common.sh@960 -- # wait 89650 00:17:37.479 04:13:39 -- target/tls.sh@209 -- # killprocess 89553 00:17:37.479 04:13:39 -- common/autotest_common.sh@936 -- # '[' -z 89553 ']' 00:17:37.479 04:13:39 -- common/autotest_common.sh@940 -- # kill -0 89553 00:17:37.479 04:13:39 -- common/autotest_common.sh@941 -- # uname 00:17:37.479 04:13:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.479 04:13:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89553 00:17:37.479 killing process with pid 89553 00:17:37.479 04:13:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:37.479 04:13:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:37.479 04:13:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89553' 00:17:37.479 04:13:39 -- common/autotest_common.sh@955 -- # kill 89553 00:17:37.479 04:13:39 -- common/autotest_common.sh@960 -- # wait 89553 00:17:37.737 04:13:39 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:37.737 04:13:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:37.737 04:13:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:37.737 04:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 04:13:39 -- target/tls.sh@212 -- # echo '{ 00:17:37.737 "subsystems": [ 00:17:37.737 { 00:17:37.737 "subsystem": "iobuf", 00:17:37.737 "config": [ 00:17:37.737 { 00:17:37.737 "method": "iobuf_set_options", 00:17:37.737 "params": { 00:17:37.737 "large_bufsize": 135168, 00:17:37.737 "large_pool_count": 1024, 00:17:37.737 "small_bufsize": 8192, 00:17:37.737 "small_pool_count": 8192 00:17:37.737 } 00:17:37.737 } 00:17:37.737 ] 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "subsystem": "sock", 00:17:37.737 "config": [ 00:17:37.737 { 00:17:37.737 "method": "sock_impl_set_options", 00:17:37.737 "params": { 00:17:37.737 "enable_ktls": false, 00:17:37.737 "enable_placement_id": 0, 00:17:37.737 "enable_quickack": false, 00:17:37.737 "enable_recv_pipe": true, 00:17:37.737 "enable_zerocopy_send_client": false, 00:17:37.737 "enable_zerocopy_send_server": true, 00:17:37.737 "impl_name": "posix", 00:17:37.737 "recv_buf_size": 2097152, 00:17:37.737 "send_buf_size": 2097152, 00:17:37.737 "tls_version": 0, 00:17:37.737 
"zerocopy_threshold": 0 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "sock_impl_set_options", 00:17:37.737 "params": { 00:17:37.737 "enable_ktls": false, 00:17:37.737 "enable_placement_id": 0, 00:17:37.737 "enable_quickack": false, 00:17:37.737 "enable_recv_pipe": true, 00:17:37.737 "enable_zerocopy_send_client": false, 00:17:37.737 "enable_zerocopy_send_server": true, 00:17:37.737 "impl_name": "ssl", 00:17:37.737 "recv_buf_size": 4096, 00:17:37.737 "send_buf_size": 4096, 00:17:37.737 "tls_version": 0, 00:17:37.737 "zerocopy_threshold": 0 00:17:37.737 } 00:17:37.737 } 00:17:37.737 ] 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "subsystem": "vmd", 00:17:37.737 "config": [] 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "subsystem": "accel", 00:17:37.737 "config": [ 00:17:37.737 { 00:17:37.737 "method": "accel_set_options", 00:17:37.737 "params": { 00:17:37.737 "buf_count": 2048, 00:17:37.737 "large_cache_size": 16, 00:17:37.737 "sequence_count": 2048, 00:17:37.737 "small_cache_size": 128, 00:17:37.737 "task_count": 2048 00:17:37.737 } 00:17:37.737 } 00:17:37.737 ] 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "subsystem": "bdev", 00:17:37.737 "config": [ 00:17:37.737 { 00:17:37.737 "method": "bdev_set_options", 00:17:37.737 "params": { 00:17:37.737 "bdev_auto_examine": true, 00:17:37.737 "bdev_io_cache_size": 256, 00:17:37.737 "bdev_io_pool_size": 65535, 00:17:37.737 "iobuf_large_cache_size": 16, 00:17:37.737 "iobuf_small_cache_size": 128 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "bdev_raid_set_options", 00:17:37.737 "params": { 00:17:37.737 "process_window_size_kb": 1024 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "bdev_iscsi_set_options", 00:17:37.737 "params": { 00:17:37.737 "timeout_sec": 30 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "bdev_nvme_set_options", 00:17:37.737 "params": { 00:17:37.737 "action_on_timeout": "none", 00:17:37.737 "allow_accel_sequence": false, 00:17:37.737 "arbitration_burst": 0, 00:17:37.737 "bdev_retry_count": 3, 00:17:37.737 "ctrlr_loss_timeout_sec": 0, 00:17:37.737 "delay_cmd_submit": true, 00:17:37.737 "fast_io_fail_timeout_sec": 0, 00:17:37.737 "generate_uuids": false, 00:17:37.737 "high_priority_weight": 0, 00:17:37.737 "io_path_stat": false, 00:17:37.737 "io_queue_requests": 0, 00:17:37.737 "keep_alive_timeout_ms": 10000, 00:17:37.737 "low_priority_weight": 0, 00:17:37.737 "medium_priority_weight": 0, 00:17:37.737 "nvme_adminq_poll_period_us": 10000, 00:17:37.737 "nvme_ioq_poll_period_us": 0, 00:17:37.737 "reconnect_delay_sec": 0, 00:17:37.737 "timeout_admin_us": 0, 00:17:37.737 "timeout_us": 0, 00:17:37.737 "transport_ack_timeout": 0, 00:17:37.737 "transport_retry_count": 4, 00:17:37.737 "transport_tos": 0 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "bdev_nvme_set_hotplug", 00:17:37.737 "params": { 00:17:37.737 "enable": false, 00:17:37.737 "period_us": 100000 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "bdev_malloc_create", 00:17:37.737 "params": { 00:17:37.737 "block_size": 4096, 00:17:37.737 "name": "malloc0", 00:17:37.737 "num_blocks": 8192, 00:17:37.737 "optimal_io_boundary": 0, 00:17:37.737 "physical_block_size": 4096, 00:17:37.737 "uuid": "60f9016f-8109-4491-bc23-267babc52379" 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "bdev_wait_for_examine" 00:17:37.737 } 00:17:37.737 ] 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "subsystem": "nbd", 00:17:37.737 "config": [] 00:17:37.737 }, 
00:17:37.737 { 00:17:37.737 "subsystem": "scheduler", 00:17:37.737 "config": [ 00:17:37.737 { 00:17:37.737 "method": "framework_set_scheduler", 00:17:37.737 "params": { 00:17:37.737 "name": "static" 00:17:37.737 } 00:17:37.737 } 00:17:37.737 ] 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "subsystem": "nvmf", 00:17:37.737 "config": [ 00:17:37.737 { 00:17:37.737 "method": "nvmf_set_config", 00:17:37.737 "params": { 00:17:37.737 "admin_cmd_passthru": { 00:17:37.737 "identify_ctrlr": false 00:17:37.737 }, 00:17:37.737 "discovery_filter": "match_any" 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "nvmf_set_max_subsystems", 00:17:37.737 "params": { 00:17:37.737 "max_subsystems": 1024 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "nvmf_set_crdt", 00:17:37.737 "params": { 00:17:37.737 "crdt1": 0, 00:17:37.737 "crdt2": 0, 00:17:37.737 "crdt3": 0 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "nvmf_create_transport", 00:17:37.737 "params": { 00:17:37.737 "abort_timeout_sec": 1, 00:17:37.737 "buf_cache_size": 4294967295, 00:17:37.737 "c2h_success": false, 00:17:37.737 "dif_insert_or_strip": false, 00:17:37.737 "in_capsule_data_size": 4096, 00:17:37.737 "io_unit_size": 131072, 00:17:37.737 "max_aq_depth": 128, 00:17:37.737 "max_io_qpairs_per_ctrlr": 127, 00:17:37.737 "max_io_size": 131072, 00:17:37.737 "max_queue_depth": 128, 00:17:37.737 "num_shared_buffers": 511, 00:17:37.737 "sock_priority": 0, 00:17:37.737 "trtype": "TCP", 00:17:37.737 "zcopy": false 00:17:37.737 } 00:17:37.737 }, 00:17:37.737 { 00:17:37.737 "method": "nvmf_create_subsystem", 00:17:37.737 "params": { 00:17:37.738 "allow_any_host": false, 00:17:37.738 "ana_reporting": false, 00:17:37.738 "max_cntlid": 65519, 00:17:37.738 "max_namespaces": 10, 00:17:37.738 "min_cntlid": 1, 00:17:37.738 "model_number": "SPDK bdev Controller", 00:17:37.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.738 "serial_number": "SPDK00000000000001" 00:17:37.738 } 00:17:37.738 }, 00:17:37.738 { 00:17:37.738 "method": "nvmf_subsystem_add_host", 00:17:37.738 "params": { 00:17:37.738 "host": "nqn.2016-06.io.spdk:host1", 00:17:37.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.738 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:37.738 } 00:17:37.738 }, 00:17:37.738 { 00:17:37.738 "method": "nvmf_subsystem_add_ns", 00:17:37.738 "params": { 00:17:37.738 "namespace": { 00:17:37.738 "bdev_name": "malloc0", 00:17:37.738 "nguid": "60F9016F81094491BC23267BABC52379", 00:17:37.738 "nsid": 1, 00:17:37.738 "uuid": "60f9016f-8109-4491-bc23-267babc52379" 00:17:37.738 }, 00:17:37.738 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:37.738 } 00:17:37.738 }, 00:17:37.738 { 00:17:37.738 "method": "nvmf_subsystem_add_listener", 00:17:37.738 "params": { 00:17:37.738 "listen_address": { 00:17:37.738 "adrfam": "IPv4", 00:17:37.738 "traddr": "10.0.0.2", 00:17:37.738 "trsvcid": "4420", 00:17:37.738 "trtype": "TCP" 00:17:37.738 }, 00:17:37.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.738 "secure_channel": true 00:17:37.738 } 00:17:37.738 } 00:17:37.738 ] 00:17:37.738 } 00:17:37.738 ] 00:17:37.738 }' 00:17:37.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:37.738 04:13:39 -- nvmf/common.sh@469 -- # nvmfpid=89723 00:17:37.738 04:13:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:37.738 04:13:39 -- nvmf/common.sh@470 -- # waitforlisten 89723 00:17:37.738 04:13:39 -- common/autotest_common.sh@829 -- # '[' -z 89723 ']' 00:17:37.738 04:13:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.738 04:13:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.738 04:13:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.738 04:13:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.738 04:13:39 -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 [2024-11-26 04:13:39.401980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:37.738 [2024-11-26 04:13:39.402269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.997 [2024-11-26 04:13:39.535443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.997 [2024-11-26 04:13:39.589876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.997 [2024-11-26 04:13:39.590283] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.997 [2024-11-26 04:13:39.590397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.997 [2024-11-26 04:13:39.590495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.997 [2024-11-26 04:13:39.590598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.256 [2024-11-26 04:13:39.800129] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.256 [2024-11-26 04:13:39.832091] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.256 [2024-11-26 04:13:39.832301] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.823 04:13:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.823 04:13:40 -- common/autotest_common.sh@862 -- # return 0 00:17:38.823 04:13:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:38.823 04:13:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:38.823 04:13:40 -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 04:13:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.823 04:13:40 -- target/tls.sh@216 -- # bdevperf_pid=89767 00:17:38.823 04:13:40 -- target/tls.sh@217 -- # waitforlisten 89767 /var/tmp/bdevperf.sock 00:17:38.823 04:13:40 -- common/autotest_common.sh@829 -- # '[' -z 89767 ']' 00:17:38.823 04:13:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.823 04:13:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.823 04:13:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:38.823 04:13:40 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:38.823 04:13:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.823 04:13:40 -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 04:13:40 -- target/tls.sh@213 -- # echo '{ 00:17:38.823 "subsystems": [ 00:17:38.823 { 00:17:38.823 "subsystem": "iobuf", 00:17:38.823 "config": [ 00:17:38.823 { 00:17:38.823 "method": "iobuf_set_options", 00:17:38.823 "params": { 00:17:38.823 "large_bufsize": 135168, 00:17:38.823 "large_pool_count": 1024, 00:17:38.823 "small_bufsize": 8192, 00:17:38.823 "small_pool_count": 8192 00:17:38.823 } 00:17:38.823 } 00:17:38.823 ] 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "subsystem": "sock", 00:17:38.823 "config": [ 00:17:38.823 { 00:17:38.823 "method": "sock_impl_set_options", 00:17:38.823 "params": { 00:17:38.823 "enable_ktls": false, 00:17:38.823 "enable_placement_id": 0, 00:17:38.823 "enable_quickack": false, 00:17:38.823 "enable_recv_pipe": true, 00:17:38.823 "enable_zerocopy_send_client": false, 00:17:38.823 "enable_zerocopy_send_server": true, 00:17:38.823 "impl_name": "posix", 00:17:38.823 "recv_buf_size": 2097152, 00:17:38.823 "send_buf_size": 2097152, 00:17:38.823 "tls_version": 0, 00:17:38.823 "zerocopy_threshold": 0 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "sock_impl_set_options", 00:17:38.823 "params": { 00:17:38.823 "enable_ktls": false, 00:17:38.823 "enable_placement_id": 0, 00:17:38.823 "enable_quickack": false, 00:17:38.823 "enable_recv_pipe": true, 00:17:38.823 "enable_zerocopy_send_client": false, 00:17:38.823 "enable_zerocopy_send_server": true, 00:17:38.823 "impl_name": "ssl", 00:17:38.823 "recv_buf_size": 4096, 00:17:38.823 "send_buf_size": 4096, 00:17:38.823 "tls_version": 0, 00:17:38.823 "zerocopy_threshold": 0 00:17:38.823 } 00:17:38.823 } 00:17:38.823 ] 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "subsystem": "vmd", 00:17:38.823 "config": [] 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "subsystem": "accel", 00:17:38.823 "config": [ 00:17:38.823 { 00:17:38.823 "method": "accel_set_options", 00:17:38.823 "params": { 00:17:38.823 "buf_count": 2048, 00:17:38.823 "large_cache_size": 16, 00:17:38.823 "sequence_count": 2048, 00:17:38.823 "small_cache_size": 128, 00:17:38.823 "task_count": 2048 00:17:38.823 } 00:17:38.823 } 00:17:38.823 ] 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "subsystem": "bdev", 00:17:38.823 "config": [ 00:17:38.823 { 00:17:38.823 "method": "bdev_set_options", 00:17:38.823 "params": { 00:17:38.823 "bdev_auto_examine": true, 00:17:38.823 "bdev_io_cache_size": 256, 00:17:38.823 "bdev_io_pool_size": 65535, 00:17:38.823 "iobuf_large_cache_size": 16, 00:17:38.823 "iobuf_small_cache_size": 128 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "bdev_raid_set_options", 00:17:38.823 "params": { 00:17:38.823 "process_window_size_kb": 1024 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "bdev_iscsi_set_options", 00:17:38.823 "params": { 00:17:38.823 "timeout_sec": 30 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "bdev_nvme_set_options", 00:17:38.823 "params": { 00:17:38.823 "action_on_timeout": "none", 00:17:38.823 "allow_accel_sequence": false, 00:17:38.823 "arbitration_burst": 0, 00:17:38.823 "bdev_retry_count": 3, 00:17:38.823 "ctrlr_loss_timeout_sec": 0, 00:17:38.823 "delay_cmd_submit": true, 00:17:38.823 "fast_io_fail_timeout_sec": 0, 
00:17:38.823 "generate_uuids": false, 00:17:38.823 "high_priority_weight": 0, 00:17:38.823 "io_path_stat": false, 00:17:38.823 "io_queue_requests": 512, 00:17:38.823 "keep_alive_timeout_ms": 10000, 00:17:38.823 "low_priority_weight": 0, 00:17:38.823 "medium_priority_weight": 0, 00:17:38.823 "nvme_adminq_poll_period_us": 10000, 00:17:38.823 "nvme_ioq_poll_period_us": 0, 00:17:38.823 "reconnect_delay_sec": 0, 00:17:38.823 "timeout_admin_us": 0, 00:17:38.823 "timeout_us": 0, 00:17:38.823 "transport_ack_timeout": 0, 00:17:38.823 "transport_retry_count": 4, 00:17:38.823 "transport_tos": 0 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "bdev_nvme_attach_controller", 00:17:38.823 "params": { 00:17:38.823 "adrfam": "IPv4", 00:17:38.823 "ctrlr_loss_timeout_sec": 0, 00:17:38.823 "ddgst": false, 00:17:38.823 "fast_io_fail_timeout_sec": 0, 00:17:38.823 "hdgst": false, 00:17:38.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.823 "name": "TLSTEST", 00:17:38.823 "prchk_guard": false, 00:17:38.823 "prchk_reftag": false, 00:17:38.823 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:38.823 "reconnect_delay_sec": 0, 00:17:38.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.823 "traddr": "10.0.0.2", 00:17:38.823 "trsvcid": "4420", 00:17:38.823 "trtype": "TCP" 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "bdev_nvme_set_hotplug", 00:17:38.823 "params": { 00:17:38.823 "enable": false, 00:17:38.823 "period_us": 100000 00:17:38.823 } 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "method": "bdev_wait_for_examine" 00:17:38.823 } 00:17:38.823 ] 00:17:38.823 }, 00:17:38.823 { 00:17:38.823 "subsystem": "nbd", 00:17:38.823 "config": [] 00:17:38.823 } 00:17:38.823 ] 00:17:38.823 }' 00:17:38.823 [2024-11-26 04:13:40.479177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:38.823 [2024-11-26 04:13:40.479274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89767 ] 00:17:39.083 [2024-11-26 04:13:40.619006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.083 [2024-11-26 04:13:40.702316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.342 [2024-11-26 04:13:40.871783] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.602 04:13:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.602 04:13:41 -- common/autotest_common.sh@862 -- # return 0 00:17:39.602 04:13:41 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:39.862 Running I/O for 10 seconds... 
00:17:49.840 00:17:49.840 Latency(us) 00:17:49.840 [2024-11-26T04:13:51.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.840 [2024-11-26T04:13:51.608Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:49.840 Verification LBA range: start 0x0 length 0x2000 00:17:49.840 TLSTESTn1 : 10.01 6603.90 25.80 0.00 0.00 19354.09 4617.31 19065.02 00:17:49.840 [2024-11-26T04:13:51.608Z] =================================================================================================================== 00:17:49.840 [2024-11-26T04:13:51.608Z] Total : 6603.90 25.80 0.00 0.00 19354.09 4617.31 19065.02 00:17:49.840 0 00:17:49.840 04:13:51 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.840 04:13:51 -- target/tls.sh@223 -- # killprocess 89767 00:17:49.840 04:13:51 -- common/autotest_common.sh@936 -- # '[' -z 89767 ']' 00:17:49.840 04:13:51 -- common/autotest_common.sh@940 -- # kill -0 89767 00:17:49.840 04:13:51 -- common/autotest_common.sh@941 -- # uname 00:17:49.840 04:13:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.840 04:13:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89767 00:17:49.840 04:13:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:49.840 04:13:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:49.840 killing process with pid 89767 00:17:49.840 04:13:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89767' 00:17:49.840 04:13:51 -- common/autotest_common.sh@955 -- # kill 89767 00:17:49.840 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.840 00:17:49.840 Latency(us) 00:17:49.840 [2024-11-26T04:13:51.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.840 [2024-11-26T04:13:51.608Z] =================================================================================================================== 00:17:49.840 [2024-11-26T04:13:51.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.840 04:13:51 -- common/autotest_common.sh@960 -- # wait 89767 00:17:50.099 04:13:51 -- target/tls.sh@224 -- # killprocess 89723 00:17:50.099 04:13:51 -- common/autotest_common.sh@936 -- # '[' -z 89723 ']' 00:17:50.099 04:13:51 -- common/autotest_common.sh@940 -- # kill -0 89723 00:17:50.099 04:13:51 -- common/autotest_common.sh@941 -- # uname 00:17:50.099 04:13:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.099 04:13:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89723 00:17:50.099 04:13:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:50.099 04:13:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:50.099 killing process with pid 89723 00:17:50.099 04:13:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89723' 00:17:50.099 04:13:51 -- common/autotest_common.sh@955 -- # kill 89723 00:17:50.099 04:13:51 -- common/autotest_common.sh@960 -- # wait 89723 00:17:50.358 04:13:51 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:50.358 04:13:51 -- target/tls.sh@227 -- # cleanup 00:17:50.358 04:13:51 -- target/tls.sh@15 -- # process_shm --id 0 00:17:50.358 04:13:51 -- common/autotest_common.sh@806 -- # type=--id 00:17:50.358 04:13:51 -- common/autotest_common.sh@807 -- # id=0 00:17:50.358 04:13:51 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:50.358 04:13:51 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:50.358 04:13:51 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:50.358 04:13:51 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:50.358 04:13:51 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:50.358 04:13:51 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:50.358 nvmf_trace.0 00:17:50.358 04:13:52 -- common/autotest_common.sh@821 -- # return 0 00:17:50.358 04:13:52 -- target/tls.sh@16 -- # killprocess 89767 00:17:50.358 04:13:52 -- common/autotest_common.sh@936 -- # '[' -z 89767 ']' 00:17:50.358 04:13:52 -- common/autotest_common.sh@940 -- # kill -0 89767 00:17:50.358 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89767) - No such process 00:17:50.358 Process with pid 89767 is not found 00:17:50.358 04:13:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89767 is not found' 00:17:50.358 04:13:52 -- target/tls.sh@17 -- # nvmftestfini 00:17:50.358 04:13:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.358 04:13:52 -- nvmf/common.sh@116 -- # sync 00:17:50.358 04:13:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.358 04:13:52 -- nvmf/common.sh@119 -- # set +e 00:17:50.358 04:13:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.358 04:13:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.358 rmmod nvme_tcp 00:17:50.358 rmmod nvme_fabrics 00:17:50.617 rmmod nvme_keyring 00:17:50.617 04:13:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.617 04:13:52 -- nvmf/common.sh@123 -- # set -e 00:17:50.617 04:13:52 -- nvmf/common.sh@124 -- # return 0 00:17:50.617 04:13:52 -- nvmf/common.sh@477 -- # '[' -n 89723 ']' 00:17:50.617 04:13:52 -- nvmf/common.sh@478 -- # killprocess 89723 00:17:50.617 04:13:52 -- common/autotest_common.sh@936 -- # '[' -z 89723 ']' 00:17:50.617 04:13:52 -- common/autotest_common.sh@940 -- # kill -0 89723 00:17:50.617 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89723) - No such process 00:17:50.617 04:13:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89723 is not found' 00:17:50.617 Process with pid 89723 is not found 00:17:50.617 04:13:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.617 04:13:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.617 04:13:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.617 04:13:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.617 04:13:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.617 04:13:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.617 04:13:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.617 04:13:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.617 04:13:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:50.617 04:13:52 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:50.617 00:17:50.617 real 1m11.204s 00:17:50.617 user 1m45.772s 00:17:50.617 sys 0m27.374s 00:17:50.617 04:13:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.617 ************************************ 00:17:50.617 04:13:52 -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 END TEST nvmf_tls 00:17:50.617 
************************************ 00:17:50.617 04:13:52 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:50.617 04:13:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.617 04:13:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.617 04:13:52 -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 ************************************ 00:17:50.617 START TEST nvmf_fips 00:17:50.617 ************************************ 00:17:50.617 04:13:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:50.617 * Looking for test storage... 00:17:50.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:50.617 04:13:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.617 04:13:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.617 04:13:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.617 04:13:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.617 04:13:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.617 04:13:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.617 04:13:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.617 04:13:52 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.617 04:13:52 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.617 04:13:52 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.617 04:13:52 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.617 04:13:52 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.617 04:13:52 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.617 04:13:52 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.617 04:13:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.617 04:13:52 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.617 04:13:52 -- scripts/common.sh@344 -- # : 1 00:17:50.617 04:13:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.617 04:13:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.617 04:13:52 -- scripts/common.sh@364 -- # decimal 1 00:17:50.876 04:13:52 -- scripts/common.sh@352 -- # local d=1 00:17:50.876 04:13:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.876 04:13:52 -- scripts/common.sh@354 -- # echo 1 00:17:50.876 04:13:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.876 04:13:52 -- scripts/common.sh@365 -- # decimal 2 00:17:50.876 04:13:52 -- scripts/common.sh@352 -- # local d=2 00:17:50.876 04:13:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.876 04:13:52 -- scripts/common.sh@354 -- # echo 2 00:17:50.876 04:13:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.876 04:13:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.876 04:13:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.876 04:13:52 -- scripts/common.sh@367 -- # return 0 00:17:50.876 04:13:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.876 04:13:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.876 --rc genhtml_branch_coverage=1 00:17:50.876 --rc genhtml_function_coverage=1 00:17:50.876 --rc genhtml_legend=1 00:17:50.876 --rc geninfo_all_blocks=1 00:17:50.876 --rc geninfo_unexecuted_blocks=1 00:17:50.876 00:17:50.876 ' 00:17:50.876 04:13:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.876 --rc genhtml_branch_coverage=1 00:17:50.876 --rc genhtml_function_coverage=1 00:17:50.876 --rc genhtml_legend=1 00:17:50.876 --rc geninfo_all_blocks=1 00:17:50.876 --rc geninfo_unexecuted_blocks=1 00:17:50.876 00:17:50.876 ' 00:17:50.876 04:13:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.876 --rc genhtml_branch_coverage=1 00:17:50.876 --rc genhtml_function_coverage=1 00:17:50.876 --rc genhtml_legend=1 00:17:50.876 --rc geninfo_all_blocks=1 00:17:50.876 --rc geninfo_unexecuted_blocks=1 00:17:50.876 00:17:50.876 ' 00:17:50.876 04:13:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.876 --rc genhtml_branch_coverage=1 00:17:50.876 --rc genhtml_function_coverage=1 00:17:50.876 --rc genhtml_legend=1 00:17:50.876 --rc geninfo_all_blocks=1 00:17:50.876 --rc geninfo_unexecuted_blocks=1 00:17:50.876 00:17:50.876 ' 00:17:50.876 04:13:52 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.876 04:13:52 -- nvmf/common.sh@7 -- # uname -s 00:17:50.876 04:13:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.876 04:13:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.876 04:13:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.877 04:13:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.877 04:13:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.877 04:13:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.877 04:13:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.877 04:13:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.877 04:13:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.877 04:13:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.877 04:13:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:17:50.877 
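The lcov gate above and the OpenSSL gate a little further down both run through the same cmp_versions helper in scripts/common.sh: each version string is split on '.', '-' and ':' into an array and the fields are compared numerically. A condensed, stand-alone sketch of that comparison, reconstructed from the xtrace output (the real helper also validates every field through decimal() and handles more operators; fields are assumed numeric here):

  # Compare two version strings field by field, as scripts/common.sh cmp_versions does.
  cmp_versions() {
      local op=$2 ver1 ver2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          if (( d1 > d2 )); then [[ $op == '>=' ]] && return 0 || return 1; fi
          if (( d1 < d2 )); then [[ $op == '>=' ]] && return 1 || return 0; fi
      done
      [[ $op == '>=' ]]   # all fields equal: true for '>=', false for strict '<'
  }

  cmp_versions 1.15 '<' 2       && echo "lcov is a 1.x release, use the 1.x option set"
  cmp_versions 3.1.1 '>=' 3.0.0 && echo "OpenSSL is new enough for the provider checks"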
04:13:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:17:50.877 04:13:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.877 04:13:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.877 04:13:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.877 04:13:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.877 04:13:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.877 04:13:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.877 04:13:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.877 04:13:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.877 04:13:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.877 04:13:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.877 04:13:52 -- paths/export.sh@5 -- # export PATH 00:17:50.877 04:13:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.877 04:13:52 -- nvmf/common.sh@46 -- # : 0 00:17:50.877 04:13:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.877 04:13:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.877 04:13:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.877 04:13:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.877 04:13:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.877 04:13:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:50.877 04:13:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.877 04:13:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.877 04:13:52 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.877 04:13:52 -- fips/fips.sh@89 -- # check_openssl_version 00:17:50.877 04:13:52 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:50.877 04:13:52 -- fips/fips.sh@85 -- # openssl version 00:17:50.877 04:13:52 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:50.877 04:13:52 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:50.877 04:13:52 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:50.877 04:13:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.877 04:13:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.877 04:13:52 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.877 04:13:52 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.877 04:13:52 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.877 04:13:52 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.877 04:13:52 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:50.877 04:13:52 -- scripts/common.sh@339 -- # ver1_l=3 00:17:50.877 04:13:52 -- scripts/common.sh@340 -- # ver2_l=3 00:17:50.877 04:13:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.877 04:13:52 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.877 04:13:52 -- scripts/common.sh@347 -- # : 1 00:17:50.877 04:13:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.877 04:13:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.877 04:13:52 -- scripts/common.sh@364 -- # decimal 3 00:17:50.877 04:13:52 -- scripts/common.sh@352 -- # local d=3 00:17:50.877 04:13:52 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:50.877 04:13:52 -- scripts/common.sh@354 -- # echo 3 00:17:50.877 04:13:52 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:50.877 04:13:52 -- scripts/common.sh@365 -- # decimal 3 00:17:50.877 04:13:52 -- scripts/common.sh@352 -- # local d=3 00:17:50.877 04:13:52 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:50.877 04:13:52 -- scripts/common.sh@354 -- # echo 3 00:17:50.877 04:13:52 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:50.877 04:13:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.877 04:13:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.877 04:13:52 -- scripts/common.sh@363 -- # (( v++ )) 00:17:50.877 04:13:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.877 04:13:52 -- scripts/common.sh@364 -- # decimal 1 00:17:50.877 04:13:52 -- scripts/common.sh@352 -- # local d=1 00:17:50.877 04:13:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.877 04:13:52 -- scripts/common.sh@354 -- # echo 1 00:17:50.877 04:13:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.877 04:13:52 -- scripts/common.sh@365 -- # decimal 0 00:17:50.877 04:13:52 -- scripts/common.sh@352 -- # local d=0 00:17:50.877 04:13:52 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:50.877 04:13:52 -- scripts/common.sh@354 -- # echo 0 00:17:50.877 04:13:52 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:50.877 04:13:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.877 04:13:52 -- scripts/common.sh@366 -- # return 0 00:17:50.877 04:13:52 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:50.877 04:13:52 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:50.877 04:13:52 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:50.877 04:13:52 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:50.877 04:13:52 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:50.877 04:13:52 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:50.877 04:13:52 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:50.877 04:13:52 -- fips/fips.sh@113 -- # build_openssl_config 00:17:50.877 04:13:52 -- fips/fips.sh@37 -- # cat 00:17:50.877 04:13:52 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:50.877 04:13:52 -- fips/fips.sh@58 -- # cat - 00:17:50.877 04:13:52 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:50.877 04:13:52 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:50.877 04:13:52 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:50.877 04:13:52 -- fips/fips.sh@116 -- # openssl list -providers 00:17:50.877 04:13:52 -- fips/fips.sh@116 -- # grep name 00:17:50.877 04:13:52 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:50.877 04:13:52 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:50.877 04:13:52 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:50.877 04:13:52 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:50.877 04:13:52 -- fips/fips.sh@127 -- # : 00:17:50.877 04:13:52 -- common/autotest_common.sh@650 -- # local es=0 00:17:50.877 04:13:52 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:50.877 04:13:52 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:50.877 04:13:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.877 04:13:52 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:50.877 04:13:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.877 04:13:52 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:50.877 04:13:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.877 04:13:52 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:50.877 04:13:52 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:50.877 04:13:52 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:50.877 Error setting digest 00:17:50.877 4022AE7E887F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:50.877 4022AE7E887F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:50.877 04:13:52 -- common/autotest_common.sh@653 -- # es=1 00:17:50.877 04:13:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.877 04:13:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.877 04:13:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.877 04:13:52 -- fips/fips.sh@130 -- # nvmftestinit 00:17:50.877 04:13:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.877 04:13:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.877 04:13:52 -- nvmf/common.sh@436 -- # prepare_net_devs 
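Before touching the network, fips.sh has just verified that OpenSSL is genuinely operating in FIPS mode: a 3.x version, a fips.so under the modules directory, a generated spdk_fips.conf activating the base and fips providers, and, as a negative test, an MD5 digest that must fail with "Error setting digest" because MD5 is not a FIPS-approved algorithm. A hedged sketch of those checks using only the openssl invocations visible in the trace (the generated spdk_fips.conf is not reproduced here, and the sketch hashes stdin rather than the /dev/fd/62 descriptor the harness uses):

  # Mirror the FIPS sanity checks from fips/fips.sh (OpenSSL 3.x assumed).
  openssl version                                   # expect a 3.x release
  modules_dir=$(openssl info -modulesdir)
  [[ -f $modules_dir/fips.so ]] || echo "warning: no fips.so under $modules_dir" >&2

  # fips.sh generates a provider config and exports OPENSSL_CONF=spdk_fips.conf;
  # the file's contents are not shown in the log, so it is only referenced here.
  openssl list -providers | grep name               # expect a base and a fips provider

  # Negative check: MD5 must be rejected when the fips provider enforces policy.
  if echo -n test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 still works -- FIPS mode is not active" >&2
  fi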
00:17:50.877 04:13:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.877 04:13:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.877 04:13:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.877 04:13:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.877 04:13:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.877 04:13:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:50.877 04:13:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:50.877 04:13:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:50.877 04:13:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:50.877 04:13:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:50.877 04:13:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:50.877 04:13:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.877 04:13:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.877 04:13:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.877 04:13:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:50.878 04:13:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.878 04:13:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.878 04:13:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.878 04:13:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.878 04:13:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.878 04:13:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.878 04:13:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.878 04:13:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.878 04:13:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:50.878 04:13:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:50.878 Cannot find device "nvmf_tgt_br" 00:17:50.878 04:13:52 -- nvmf/common.sh@154 -- # true 00:17:50.878 04:13:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.878 Cannot find device "nvmf_tgt_br2" 00:17:50.878 04:13:52 -- nvmf/common.sh@155 -- # true 00:17:50.878 04:13:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:50.878 04:13:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:51.136 Cannot find device "nvmf_tgt_br" 00:17:51.136 04:13:52 -- nvmf/common.sh@157 -- # true 00:17:51.136 04:13:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:51.136 Cannot find device "nvmf_tgt_br2" 00:17:51.136 04:13:52 -- nvmf/common.sh@158 -- # true 00:17:51.136 04:13:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:51.136 04:13:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:51.136 04:13:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.136 04:13:52 -- nvmf/common.sh@161 -- # true 00:17:51.136 04:13:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.136 04:13:52 -- nvmf/common.sh@162 -- # true 00:17:51.136 04:13:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.136 04:13:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.136 04:13:52 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.136 04:13:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.136 04:13:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.136 04:13:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.136 04:13:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.136 04:13:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.136 04:13:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.136 04:13:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:51.136 04:13:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:51.136 04:13:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:51.136 04:13:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:51.136 04:13:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.136 04:13:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.136 04:13:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.136 04:13:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:51.136 04:13:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:51.136 04:13:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.136 04:13:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.136 04:13:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.136 04:13:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.136 04:13:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.136 04:13:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:51.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:17:51.136 00:17:51.136 --- 10.0.0.2 ping statistics --- 00:17:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.136 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:51.136 04:13:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:51.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:51.136 00:17:51.136 --- 10.0.0.3 ping statistics --- 00:17:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.136 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:51.136 04:13:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:51.394 00:17:51.394 --- 10.0.0.1 ping statistics --- 00:17:51.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.394 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:51.394 04:13:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.394 04:13:52 -- nvmf/common.sh@421 -- # return 0 00:17:51.394 04:13:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.394 04:13:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.394 04:13:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.394 04:13:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.394 04:13:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.394 04:13:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.394 04:13:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.394 04:13:52 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:51.394 04:13:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.394 04:13:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.394 04:13:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.394 04:13:52 -- nvmf/common.sh@469 -- # nvmfpid=90133 00:17:51.394 04:13:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.394 04:13:52 -- nvmf/common.sh@470 -- # waitforlisten 90133 00:17:51.394 04:13:52 -- common/autotest_common.sh@829 -- # '[' -z 90133 ']' 00:17:51.394 04:13:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.394 04:13:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.394 04:13:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.394 04:13:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.394 04:13:52 -- common/autotest_common.sh@10 -- # set +x 00:17:51.394 [2024-11-26 04:13:53.012106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:51.394 [2024-11-26 04:13:53.012207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.652 [2024-11-26 04:13:53.156769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.652 [2024-11-26 04:13:53.236875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.652 [2024-11-26 04:13:53.237062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.652 [2024-11-26 04:13:53.237081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.652 [2024-11-26 04:13:53.237093] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
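The nvmf_veth_init sequence traced above is the standard topology for these tcp autotests: a namespace nvmf_tgt_ns_spdk holding the target interfaces at 10.0.0.2 and 10.0.0.3, an initiator-side veth at 10.0.0.1, everything joined through the nvmf_br bridge, and an iptables rule admitting port 4420. Condensed into a plain shell sketch (names, addresses and the port are exactly the ones in the trace; the teardown of any earlier topology and the error handling are omitted):

  # Build the veth/bridge topology used by the nvmf tcp autotests (run as root).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-side ends live inside the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side ends together and allow NVMe/TCP traffic on 4420.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2    # the initiator side can now reach the target address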
00:17:51.652 [2024-11-26 04:13:53.237148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.588 04:13:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.588 04:13:54 -- common/autotest_common.sh@862 -- # return 0 00:17:52.588 04:13:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.588 04:13:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.588 04:13:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.588 04:13:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.588 04:13:54 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:52.588 04:13:54 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:52.588 04:13:54 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:52.588 04:13:54 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:52.588 04:13:54 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:52.588 04:13:54 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:52.588 04:13:54 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:52.588 04:13:54 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.588 [2024-11-26 04:13:54.332347] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.588 [2024-11-26 04:13:54.348324] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.588 [2024-11-26 04:13:54.348526] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.846 malloc0 00:17:52.846 04:13:54 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.846 04:13:54 -- fips/fips.sh@147 -- # bdevperf_pid=90190 00:17:52.846 04:13:54 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.846 04:13:54 -- fips/fips.sh@148 -- # waitforlisten 90190 /var/tmp/bdevperf.sock 00:17:52.846 04:13:54 -- common/autotest_common.sh@829 -- # '[' -z 90190 ']' 00:17:52.846 04:13:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.846 04:13:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.846 04:13:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.846 04:13:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.846 04:13:54 -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 [2024-11-26 04:13:54.489220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
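The key the test just wrote is a pre-shared key in the NVMe TLS PSK interchange format (the NVMeTLSkey-1:01: prefix); fips.sh only has to store it with restrictive permissions before setup_nvmf_tgt_conf registers the 10.0.0.2:4420 TLS listener and the same path is later handed to the initiator. The step exactly as traced (the key is throwaway test material):

  # Write the TLS pre-shared key used by the FIPS test (path and key as in the trace).
  key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"    # the PSK file must not be readable by other users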
00:17:52.846 [2024-11-26 04:13:54.489317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90190 ] 00:17:53.105 [2024-11-26 04:13:54.631007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.105 [2024-11-26 04:13:54.696382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.041 04:13:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.041 04:13:55 -- common/autotest_common.sh@862 -- # return 0 00:17:54.041 04:13:55 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:54.041 [2024-11-26 04:13:55.718484] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.041 TLSTESTn1 00:17:54.299 04:13:55 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.299 Running I/O for 10 seconds... 00:18:04.279 00:18:04.279 Latency(us) 00:18:04.279 [2024-11-26T04:14:06.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.279 [2024-11-26T04:14:06.047Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:04.279 Verification LBA range: start 0x0 length 0x2000 00:18:04.279 TLSTESTn1 : 10.01 6430.91 25.12 0.00 0.00 19874.84 4915.20 24546.21 00:18:04.279 [2024-11-26T04:14:06.047Z] =================================================================================================================== 00:18:04.279 [2024-11-26T04:14:06.047Z] Total : 6430.91 25.12 0.00 0.00 19874.84 4915.20 24546.21 00:18:04.279 0 00:18:04.279 04:14:05 -- fips/fips.sh@1 -- # cleanup 00:18:04.279 04:14:05 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:04.279 04:14:05 -- common/autotest_common.sh@806 -- # type=--id 00:18:04.279 04:14:05 -- common/autotest_common.sh@807 -- # id=0 00:18:04.279 04:14:05 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:04.279 04:14:05 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:04.279 04:14:05 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:04.279 04:14:05 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:04.279 04:14:05 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:04.279 04:14:05 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:04.279 nvmf_trace.0 00:18:04.538 04:14:06 -- common/autotest_common.sh@821 -- # return 0 00:18:04.538 04:14:06 -- fips/fips.sh@16 -- # killprocess 90190 00:18:04.538 04:14:06 -- common/autotest_common.sh@936 -- # '[' -z 90190 ']' 00:18:04.538 04:14:06 -- common/autotest_common.sh@940 -- # kill -0 90190 00:18:04.538 04:14:06 -- common/autotest_common.sh@941 -- # uname 00:18:04.538 04:14:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.538 04:14:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90190 00:18:04.538 04:14:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:04.538 04:14:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:04.538 
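On the initiator side the same key file is handed to bdevperf's bdev_nvme_attach_controller over its private RPC socket, which is what produces the second "TLS support is considered experimental" notice and the TLSTESTn1 bdev that the 10-second verify workload then exercises. The commands below are the ones from the trace, reflowed onto separate lines (the waitforlisten synchronization between starting bdevperf and issuing RPCs is replaced by a plain sleep):

  # Start bdevperf with its own RPC socket, then attach over TCP with the PSK.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  sleep 2    # stand-in for waitforlisten on /var/tmp/bdevperf.sock

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  # Kick off the queued verify job (the "Running I/O for 10 seconds..." line above).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests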
04:14:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90190' 00:18:04.538 killing process with pid 90190 00:18:04.538 04:14:06 -- common/autotest_common.sh@955 -- # kill 90190 00:18:04.538 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.538 00:18:04.538 Latency(us) 00:18:04.538 [2024-11-26T04:14:06.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.538 [2024-11-26T04:14:06.306Z] =================================================================================================================== 00:18:04.538 [2024-11-26T04:14:06.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.538 04:14:06 -- common/autotest_common.sh@960 -- # wait 90190 00:18:04.538 04:14:06 -- fips/fips.sh@17 -- # nvmftestfini 00:18:04.538 04:14:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:04.538 04:14:06 -- nvmf/common.sh@116 -- # sync 00:18:04.797 04:14:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:04.797 04:14:06 -- nvmf/common.sh@119 -- # set +e 00:18:04.797 04:14:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:04.797 04:14:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:04.797 rmmod nvme_tcp 00:18:04.797 rmmod nvme_fabrics 00:18:04.797 rmmod nvme_keyring 00:18:04.797 04:14:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:04.797 04:14:06 -- nvmf/common.sh@123 -- # set -e 00:18:04.797 04:14:06 -- nvmf/common.sh@124 -- # return 0 00:18:04.797 04:14:06 -- nvmf/common.sh@477 -- # '[' -n 90133 ']' 00:18:04.797 04:14:06 -- nvmf/common.sh@478 -- # killprocess 90133 00:18:04.797 04:14:06 -- common/autotest_common.sh@936 -- # '[' -z 90133 ']' 00:18:04.797 04:14:06 -- common/autotest_common.sh@940 -- # kill -0 90133 00:18:04.797 04:14:06 -- common/autotest_common.sh@941 -- # uname 00:18:04.797 04:14:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.797 04:14:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90133 00:18:04.797 04:14:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.797 04:14:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:04.797 killing process with pid 90133 00:18:04.797 04:14:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90133' 00:18:04.797 04:14:06 -- common/autotest_common.sh@955 -- # kill 90133 00:18:04.797 04:14:06 -- common/autotest_common.sh@960 -- # wait 90133 00:18:05.056 04:14:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:05.056 04:14:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:05.056 04:14:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:05.056 04:14:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.056 04:14:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:05.056 04:14:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.056 04:14:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.056 04:14:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.056 04:14:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:05.056 04:14:06 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:05.056 00:18:05.056 real 0m14.478s 00:18:05.056 user 0m18.554s 00:18:05.056 sys 0m6.607s 00:18:05.056 04:14:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:05.056 04:14:06 -- common/autotest_common.sh@10 -- # set +x 00:18:05.056 ************************************ 00:18:05.056 END TEST nvmf_fips 
00:18:05.056 ************************************ 00:18:05.056 04:14:06 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:05.056 04:14:06 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:05.056 04:14:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:05.056 04:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.056 04:14:06 -- common/autotest_common.sh@10 -- # set +x 00:18:05.056 ************************************ 00:18:05.056 START TEST nvmf_fuzz 00:18:05.056 ************************************ 00:18:05.056 04:14:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:05.316 * Looking for test storage... 00:18:05.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:05.316 04:14:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:05.316 04:14:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:05.316 04:14:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:05.316 04:14:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:05.316 04:14:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:05.316 04:14:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:05.316 04:14:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:05.316 04:14:06 -- scripts/common.sh@335 -- # IFS=.-: 00:18:05.316 04:14:06 -- scripts/common.sh@335 -- # read -ra ver1 00:18:05.316 04:14:06 -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.316 04:14:06 -- scripts/common.sh@336 -- # read -ra ver2 00:18:05.316 04:14:06 -- scripts/common.sh@337 -- # local 'op=<' 00:18:05.316 04:14:06 -- scripts/common.sh@339 -- # ver1_l=2 00:18:05.316 04:14:06 -- scripts/common.sh@340 -- # ver2_l=1 00:18:05.316 04:14:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:05.316 04:14:06 -- scripts/common.sh@343 -- # case "$op" in 00:18:05.316 04:14:06 -- scripts/common.sh@344 -- # : 1 00:18:05.316 04:14:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:05.316 04:14:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.316 04:14:06 -- scripts/common.sh@364 -- # decimal 1 00:18:05.316 04:14:06 -- scripts/common.sh@352 -- # local d=1 00:18:05.316 04:14:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.316 04:14:06 -- scripts/common.sh@354 -- # echo 1 00:18:05.316 04:14:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:05.316 04:14:06 -- scripts/common.sh@365 -- # decimal 2 00:18:05.316 04:14:06 -- scripts/common.sh@352 -- # local d=2 00:18:05.316 04:14:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.316 04:14:06 -- scripts/common.sh@354 -- # echo 2 00:18:05.316 04:14:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:05.316 04:14:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:05.316 04:14:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:05.316 04:14:06 -- scripts/common.sh@367 -- # return 0 00:18:05.316 04:14:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.316 04:14:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:05.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.316 --rc genhtml_branch_coverage=1 00:18:05.316 --rc genhtml_function_coverage=1 00:18:05.316 --rc genhtml_legend=1 00:18:05.316 --rc geninfo_all_blocks=1 00:18:05.316 --rc geninfo_unexecuted_blocks=1 00:18:05.316 00:18:05.316 ' 00:18:05.316 04:14:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:05.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.316 --rc genhtml_branch_coverage=1 00:18:05.316 --rc genhtml_function_coverage=1 00:18:05.316 --rc genhtml_legend=1 00:18:05.316 --rc geninfo_all_blocks=1 00:18:05.316 --rc geninfo_unexecuted_blocks=1 00:18:05.316 00:18:05.316 ' 00:18:05.316 04:14:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:05.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.316 --rc genhtml_branch_coverage=1 00:18:05.316 --rc genhtml_function_coverage=1 00:18:05.316 --rc genhtml_legend=1 00:18:05.316 --rc geninfo_all_blocks=1 00:18:05.316 --rc geninfo_unexecuted_blocks=1 00:18:05.316 00:18:05.316 ' 00:18:05.316 04:14:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:05.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.316 --rc genhtml_branch_coverage=1 00:18:05.316 --rc genhtml_function_coverage=1 00:18:05.316 --rc genhtml_legend=1 00:18:05.316 --rc geninfo_all_blocks=1 00:18:05.316 --rc geninfo_unexecuted_blocks=1 00:18:05.316 00:18:05.316 ' 00:18:05.316 04:14:06 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.316 04:14:06 -- nvmf/common.sh@7 -- # uname -s 00:18:05.316 04:14:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.316 04:14:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.316 04:14:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.316 04:14:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.316 04:14:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.316 04:14:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.316 04:14:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.316 04:14:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.316 04:14:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.316 04:14:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.316 04:14:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
00:18:05.316 04:14:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:18:05.316 04:14:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.316 04:14:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.316 04:14:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.316 04:14:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.316 04:14:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.316 04:14:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.316 04:14:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.316 04:14:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.316 04:14:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.316 04:14:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.316 04:14:06 -- paths/export.sh@5 -- # export PATH 00:18:05.316 04:14:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.316 04:14:06 -- nvmf/common.sh@46 -- # : 0 00:18:05.316 04:14:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:05.316 04:14:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:05.316 04:14:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:05.316 04:14:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.316 04:14:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.316 04:14:06 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:05.316 04:14:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:05.316 04:14:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:05.316 04:14:06 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:05.316 04:14:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:05.317 04:14:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.317 04:14:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:05.317 04:14:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:05.317 04:14:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:05.317 04:14:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.317 04:14:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.317 04:14:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.317 04:14:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:05.317 04:14:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:05.317 04:14:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:05.317 04:14:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:05.317 04:14:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:05.317 04:14:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:05.317 04:14:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.317 04:14:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.317 04:14:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:05.317 04:14:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:05.317 04:14:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.317 04:14:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.317 04:14:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.317 04:14:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.317 04:14:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.317 04:14:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.317 04:14:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.317 04:14:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.317 04:14:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:05.317 04:14:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:05.317 Cannot find device "nvmf_tgt_br" 00:18:05.317 04:14:07 -- nvmf/common.sh@154 -- # true 00:18:05.317 04:14:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.317 Cannot find device "nvmf_tgt_br2" 00:18:05.317 04:14:07 -- nvmf/common.sh@155 -- # true 00:18:05.317 04:14:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:05.317 04:14:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:05.317 Cannot find device "nvmf_tgt_br" 00:18:05.317 04:14:07 -- nvmf/common.sh@157 -- # true 00:18:05.317 04:14:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:05.317 Cannot find device "nvmf_tgt_br2" 00:18:05.317 04:14:07 -- nvmf/common.sh@158 -- # true 00:18:05.317 04:14:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:05.576 04:14:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:05.576 04:14:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.576 04:14:07 -- nvmf/common.sh@161 -- # true 00:18:05.576 04:14:07 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.576 04:14:07 -- nvmf/common.sh@162 -- # true 00:18:05.576 04:14:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.576 04:14:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.576 04:14:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.576 04:14:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:05.576 04:14:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:05.576 04:14:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:05.576 04:14:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:05.576 04:14:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:05.576 04:14:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:05.576 04:14:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:05.576 04:14:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:05.576 04:14:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:05.576 04:14:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:05.576 04:14:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.576 04:14:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.576 04:14:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.576 04:14:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:05.576 04:14:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:05.576 04:14:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:05.576 04:14:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:05.576 04:14:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:05.576 04:14:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:05.576 04:14:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.576 04:14:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:05.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:05.576 00:18:05.576 --- 10.0.0.2 ping statistics --- 00:18:05.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.576 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:05.576 04:14:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:05.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:05.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:18:05.576 00:18:05.576 --- 10.0.0.3 ping statistics --- 00:18:05.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.576 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:05.576 04:14:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:05.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:05.576 00:18:05.576 --- 10.0.0.1 ping statistics --- 00:18:05.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.576 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:05.576 04:14:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.576 04:14:07 -- nvmf/common.sh@421 -- # return 0 00:18:05.576 04:14:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:05.576 04:14:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.576 04:14:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:05.576 04:14:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:05.576 04:14:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.576 04:14:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:05.576 04:14:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:05.576 04:14:07 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90547 00:18:05.576 04:14:07 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:05.576 04:14:07 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:05.576 04:14:07 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90547 00:18:05.576 04:14:07 -- common/autotest_common.sh@829 -- # '[' -z 90547 ']' 00:18:05.576 04:14:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.576 04:14:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.576 04:14:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
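With the target process up inside the namespace, fabrics_fuzz.sh builds the fuzz subject entirely through rpc_cmd and then points nvme_fuzz at it twice: a 30-second randomized run with a fixed seed, then a replay of the canned cases in example.json. The trace that follows shows each call; collapsed here into equivalent direct invocations (rpc_cmd in the trace effectively forwards to scripts/rpc.py on the default socket; all flags are verbatim from the log):

  # Create the TCP transport, a malloc bdev used as the namespace, and a listener on 4420.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

  # Randomized fuzzing for 30 seconds with seed 123456...
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  # ...then replay the canned commands from example.json.
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
      -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a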
00:18:05.576 04:14:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.576 04:14:07 -- common/autotest_common.sh@10 -- # set +x 00:18:06.952 04:14:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.952 04:14:08 -- common/autotest_common.sh@862 -- # return 0 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.952 04:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.952 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:18:06.952 04:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:06.952 04:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.952 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:18:06.952 Malloc0 00:18:06.952 04:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.952 04:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.952 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:18:06.952 04:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.952 04:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.952 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:18:06.952 04:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.952 04:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.952 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:18:06.952 04:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:06.952 04:14:08 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:07.211 Shutting down the fuzz application 00:18:07.211 04:14:08 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:07.470 Shutting down the fuzz application 00:18:07.470 04:14:09 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.470 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.470 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:07.470 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.470 04:14:09 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:07.470 04:14:09 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:07.470 04:14:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:07.470 04:14:09 -- nvmf/common.sh@116 -- # sync 00:18:07.470 04:14:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.470 04:14:09 -- nvmf/common.sh@119 -- # set +e 00:18:07.470 04:14:09 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.470 04:14:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.470 rmmod nvme_tcp 00:18:07.470 rmmod nvme_fabrics 00:18:07.727 rmmod nvme_keyring 00:18:07.727 04:14:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.727 04:14:09 -- nvmf/common.sh@123 -- # set -e 00:18:07.727 04:14:09 -- nvmf/common.sh@124 -- # return 0 00:18:07.727 04:14:09 -- nvmf/common.sh@477 -- # '[' -n 90547 ']' 00:18:07.727 04:14:09 -- nvmf/common.sh@478 -- # killprocess 90547 00:18:07.727 04:14:09 -- common/autotest_common.sh@936 -- # '[' -z 90547 ']' 00:18:07.727 04:14:09 -- common/autotest_common.sh@940 -- # kill -0 90547 00:18:07.727 04:14:09 -- common/autotest_common.sh@941 -- # uname 00:18:07.727 04:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.727 04:14:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90547 00:18:07.727 killing process with pid 90547 00:18:07.727 04:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:07.727 04:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:07.727 04:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90547' 00:18:07.727 04:14:09 -- common/autotest_common.sh@955 -- # kill 90547 00:18:07.727 04:14:09 -- common/autotest_common.sh@960 -- # wait 90547 00:18:07.986 04:14:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.986 04:14:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.986 04:14:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.986 04:14:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.986 04:14:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.986 04:14:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.986 04:14:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.986 04:14:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.986 04:14:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:07.986 04:14:09 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:07.986 00:18:07.986 real 0m2.800s 00:18:07.986 user 0m2.915s 00:18:07.986 sys 0m0.699s 00:18:07.986 04:14:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:07.986 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:07.986 ************************************ 00:18:07.986 END TEST nvmf_fuzz 00:18:07.986 ************************************ 00:18:07.986 04:14:09 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:07.986 04:14:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.986 04:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.986 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:18:07.986 ************************************ 00:18:07.986 START TEST nvmf_multiconnection 00:18:07.986 ************************************ 00:18:07.986 04:14:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:07.986 * Looking for test storage... 
00:18:07.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.986 04:14:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:07.986 04:14:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:07.986 04:14:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:08.246 04:14:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:08.246 04:14:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:08.246 04:14:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:08.246 04:14:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:08.246 04:14:09 -- scripts/common.sh@335 -- # IFS=.-: 00:18:08.246 04:14:09 -- scripts/common.sh@335 -- # read -ra ver1 00:18:08.246 04:14:09 -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.246 04:14:09 -- scripts/common.sh@336 -- # read -ra ver2 00:18:08.246 04:14:09 -- scripts/common.sh@337 -- # local 'op=<' 00:18:08.246 04:14:09 -- scripts/common.sh@339 -- # ver1_l=2 00:18:08.246 04:14:09 -- scripts/common.sh@340 -- # ver2_l=1 00:18:08.246 04:14:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:08.246 04:14:09 -- scripts/common.sh@343 -- # case "$op" in 00:18:08.246 04:14:09 -- scripts/common.sh@344 -- # : 1 00:18:08.246 04:14:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:08.246 04:14:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.246 04:14:09 -- scripts/common.sh@364 -- # decimal 1 00:18:08.246 04:14:09 -- scripts/common.sh@352 -- # local d=1 00:18:08.246 04:14:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.246 04:14:09 -- scripts/common.sh@354 -- # echo 1 00:18:08.246 04:14:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:08.246 04:14:09 -- scripts/common.sh@365 -- # decimal 2 00:18:08.246 04:14:09 -- scripts/common.sh@352 -- # local d=2 00:18:08.246 04:14:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.246 04:14:09 -- scripts/common.sh@354 -- # echo 2 00:18:08.246 04:14:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:08.246 04:14:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:08.246 04:14:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:08.246 04:14:09 -- scripts/common.sh@367 -- # return 0 00:18:08.246 04:14:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.246 04:14:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:08.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.246 --rc genhtml_branch_coverage=1 00:18:08.246 --rc genhtml_function_coverage=1 00:18:08.246 --rc genhtml_legend=1 00:18:08.246 --rc geninfo_all_blocks=1 00:18:08.246 --rc geninfo_unexecuted_blocks=1 00:18:08.246 00:18:08.246 ' 00:18:08.246 04:14:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:08.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.246 --rc genhtml_branch_coverage=1 00:18:08.246 --rc genhtml_function_coverage=1 00:18:08.246 --rc genhtml_legend=1 00:18:08.246 --rc geninfo_all_blocks=1 00:18:08.246 --rc geninfo_unexecuted_blocks=1 00:18:08.246 00:18:08.246 ' 00:18:08.246 04:14:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:08.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.246 --rc genhtml_branch_coverage=1 00:18:08.246 --rc genhtml_function_coverage=1 00:18:08.246 --rc genhtml_legend=1 00:18:08.246 --rc geninfo_all_blocks=1 00:18:08.246 --rc geninfo_unexecuted_blocks=1 00:18:08.246 00:18:08.246 ' 00:18:08.246 
04:14:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:08.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.246 --rc genhtml_branch_coverage=1 00:18:08.246 --rc genhtml_function_coverage=1 00:18:08.246 --rc genhtml_legend=1 00:18:08.246 --rc geninfo_all_blocks=1 00:18:08.246 --rc geninfo_unexecuted_blocks=1 00:18:08.246 00:18:08.246 ' 00:18:08.246 04:14:09 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:08.246 04:14:09 -- nvmf/common.sh@7 -- # uname -s 00:18:08.246 04:14:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.246 04:14:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.246 04:14:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.246 04:14:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.246 04:14:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.246 04:14:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.246 04:14:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.246 04:14:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.247 04:14:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.247 04:14:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.247 04:14:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:18:08.247 04:14:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:18:08.247 04:14:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.247 04:14:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.247 04:14:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:08.247 04:14:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.247 04:14:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.247 04:14:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.247 04:14:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.247 04:14:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.247 04:14:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.247 04:14:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.247 04:14:09 -- paths/export.sh@5 -- # export PATH 00:18:08.247 04:14:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.247 04:14:09 -- nvmf/common.sh@46 -- # : 0 00:18:08.247 04:14:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:08.247 04:14:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:08.247 04:14:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:08.247 04:14:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.247 04:14:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.247 04:14:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:08.247 04:14:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:08.247 04:14:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:08.247 04:14:09 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.247 04:14:09 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.247 04:14:09 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:08.247 04:14:09 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:08.247 04:14:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:08.247 04:14:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.247 04:14:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:08.247 04:14:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:08.247 04:14:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:08.247 04:14:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.247 04:14:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.247 04:14:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.247 04:14:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:08.247 04:14:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:08.247 04:14:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:08.247 04:14:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:08.247 04:14:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:08.247 04:14:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:08.247 04:14:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.247 04:14:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.247 04:14:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:08.247 04:14:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:08.247 04:14:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:08.247 04:14:09 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:08.247 04:14:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:08.247 04:14:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.247 04:14:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:08.247 04:14:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:08.247 04:14:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:08.247 04:14:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:08.247 04:14:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:08.247 04:14:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:08.247 Cannot find device "nvmf_tgt_br" 00:18:08.247 04:14:09 -- nvmf/common.sh@154 -- # true 00:18:08.247 04:14:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.247 Cannot find device "nvmf_tgt_br2" 00:18:08.247 04:14:09 -- nvmf/common.sh@155 -- # true 00:18:08.247 04:14:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:08.247 04:14:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:08.247 Cannot find device "nvmf_tgt_br" 00:18:08.247 04:14:09 -- nvmf/common.sh@157 -- # true 00:18:08.247 04:14:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:08.247 Cannot find device "nvmf_tgt_br2" 00:18:08.247 04:14:09 -- nvmf/common.sh@158 -- # true 00:18:08.247 04:14:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:08.247 04:14:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:08.247 04:14:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:08.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.247 04:14:09 -- nvmf/common.sh@161 -- # true 00:18:08.247 04:14:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:08.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.247 04:14:09 -- nvmf/common.sh@162 -- # true 00:18:08.247 04:14:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:08.247 04:14:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:08.247 04:14:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:08.247 04:14:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:08.506 04:14:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:08.506 04:14:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:08.506 04:14:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:08.506 04:14:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:08.506 04:14:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:08.506 04:14:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:08.506 04:14:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:08.506 04:14:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:08.506 04:14:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:08.506 04:14:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:08.506 04:14:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:08.506 04:14:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:08.506 04:14:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:08.506 04:14:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:08.506 04:14:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:08.506 04:14:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:08.506 04:14:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:08.506 04:14:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:08.506 04:14:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:08.506 04:14:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:08.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:18:08.506 00:18:08.506 --- 10.0.0.2 ping statistics --- 00:18:08.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.506 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:08.506 04:14:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:08.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:08.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:08.506 00:18:08.506 --- 10.0.0.3 ping statistics --- 00:18:08.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.506 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:08.506 04:14:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:08.506 00:18:08.506 --- 10.0.0.1 ping statistics --- 00:18:08.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.506 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:08.506 04:14:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.506 04:14:10 -- nvmf/common.sh@421 -- # return 0 00:18:08.507 04:14:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:08.507 04:14:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.507 04:14:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:08.507 04:14:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:08.507 04:14:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.507 04:14:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:08.507 04:14:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:08.507 04:14:10 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:08.507 04:14:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:08.507 04:14:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.507 04:14:10 -- common/autotest_common.sh@10 -- # set +x 00:18:08.507 04:14:10 -- nvmf/common.sh@469 -- # nvmfpid=90755 00:18:08.507 04:14:10 -- nvmf/common.sh@470 -- # waitforlisten 90755 00:18:08.507 04:14:10 -- common/autotest_common.sh@829 -- # '[' -z 90755 ']' 00:18:08.507 04:14:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:08.507 04:14:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.507 04:14:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.507 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:08.507 04:14:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.507 04:14:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.507 04:14:10 -- common/autotest_common.sh@10 -- # set +x 00:18:08.765 [2024-11-26 04:14:10.300393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:08.765 [2024-11-26 04:14:10.300475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.765 [2024-11-26 04:14:10.442087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.765 [2024-11-26 04:14:10.515818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:08.765 [2024-11-26 04:14:10.516006] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.765 [2024-11-26 04:14:10.516022] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.765 [2024-11-26 04:14:10.516033] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.765 [2024-11-26 04:14:10.516210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.765 [2024-11-26 04:14:10.516737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.765 [2024-11-26 04:14:10.517033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.765 [2024-11-26 04:14:10.517046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.702 04:14:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.702 04:14:11 -- common/autotest_common.sh@862 -- # return 0 00:18:09.702 04:14:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:09.702 04:14:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.702 04:14:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.702 04:14:11 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.702 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.702 [2024-11-26 04:14:11.381339] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.702 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.702 04:14:11 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:09.702 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.702 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:09.702 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.702 Malloc1 00:18:09.702 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.702 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:09.702 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.702 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.702 
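[editor note] The trace below repeats the same four RPCs for Malloc1/cnode1 through Malloc11/cnode11 (NVMF_SUBSYS=11). Condensed, the target-side setup amounts to roughly the loop below; it is a sketch that uses scripts/rpc.py for brevity, whereas the test itself issues each call through the rpc_cmd helper, as the surrounding trace shows.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                             # one TCP transport shared by all subsystems
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i                           # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done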
04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.702 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.702 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.702 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.702 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.702 [2024-11-26 04:14:11.456304] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.702 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.702 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.702 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:09.702 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.702 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 Malloc2 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.962 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 Malloc3 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.962 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 Malloc4 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.962 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 Malloc5 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.962 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 Malloc6 00:18:09.962 04:14:11 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:09.962 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.222 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 Malloc7 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.222 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 Malloc8 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 
-- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.222 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 Malloc9 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.222 04:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 Malloc10 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 04:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:10.222 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.223 04:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.223 04:14:11 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:10.223 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.223 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.482 Malloc11 00:18:10.482 04:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.482 04:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:10.482 04:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.482 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:18:10.482 04:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.482 04:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:10.482 04:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.482 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:10.482 04:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.482 04:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:10.482 04:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.482 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:10.482 04:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.482 04:14:12 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:10.482 04:14:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.482 04:14:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:10.482 04:14:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:10.482 04:14:12 -- common/autotest_common.sh@1187 -- # local i=0 00:18:10.482 04:14:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.482 04:14:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:10.482 04:14:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:13.016 04:14:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:13.016 04:14:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:13.016 04:14:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:13.016 04:14:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:13.016 04:14:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.016 04:14:14 -- common/autotest_common.sh@1197 -- # return 0 00:18:13.016 04:14:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:13.016 04:14:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:13.016 04:14:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:13.016 04:14:14 -- common/autotest_common.sh@1187 -- # local i=0 00:18:13.016 04:14:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.016 04:14:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:13.016 04:14:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:14.921 04:14:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:14.921 04:14:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:14.921 04:14:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:14.921 04:14:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:14.921 04:14:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.921 04:14:16 -- common/autotest_common.sh@1197 -- # return 0 00:18:14.921 04:14:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.922 04:14:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:14.922 04:14:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:14.922 04:14:16 -- common/autotest_common.sh@1187 -- # local i=0 00:18:14.922 04:14:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.922 04:14:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:14.922 04:14:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:16.856 04:14:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:16.856 04:14:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:16.856 04:14:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:17.115 04:14:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:17.115 04:14:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.115 04:14:18 -- common/autotest_common.sh@1197 -- # return 0 00:18:17.115 04:14:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.115 04:14:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:17.115 04:14:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:17.115 04:14:18 -- common/autotest_common.sh@1187 -- # local i=0 00:18:17.115 04:14:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.115 04:14:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:17.115 04:14:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:19.648 04:14:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:19.648 04:14:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:19.648 04:14:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:19.648 04:14:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:19.648 04:14:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.648 04:14:20 -- common/autotest_common.sh@1197 -- # return 0 00:18:19.648 04:14:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.648 04:14:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:19.648 04:14:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:19.648 04:14:20 -- common/autotest_common.sh@1187 -- # local i=0 00:18:19.648 04:14:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.648 04:14:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:19.648 04:14:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:21.552 04:14:23 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:21.552 04:14:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:21.552 04:14:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:21.552 04:14:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:21.552 04:14:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.552 04:14:23 -- common/autotest_common.sh@1197 -- # return 0 00:18:21.552 04:14:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.552 04:14:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:21.552 04:14:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:21.552 04:14:23 -- common/autotest_common.sh@1187 -- # local i=0 00:18:21.552 04:14:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.552 04:14:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:21.552 04:14:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:23.456 04:14:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:23.456 04:14:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:23.456 04:14:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:23.714 04:14:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:23.714 04:14:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.714 04:14:25 -- common/autotest_common.sh@1197 -- # return 0 00:18:23.714 04:14:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.714 04:14:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:23.714 04:14:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:23.715 04:14:25 -- common/autotest_common.sh@1187 -- # local i=0 00:18:23.715 04:14:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.715 04:14:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:23.715 04:14:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:26.247 04:14:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:26.247 04:14:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:26.247 04:14:27 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:26.247 04:14:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:26.247 04:14:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.247 04:14:27 -- common/autotest_common.sh@1197 -- # return 0 00:18:26.247 04:14:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:26.247 04:14:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:26.247 04:14:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:26.247 04:14:27 -- common/autotest_common.sh@1187 -- # local i=0 00:18:26.247 04:14:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.247 04:14:27 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:26.247 04:14:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:28.147 04:14:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:28.147 04:14:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:28.147 04:14:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:28.147 04:14:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:28.147 04:14:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.147 04:14:29 -- common/autotest_common.sh@1197 -- # return 0 00:18:28.147 04:14:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.147 04:14:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:28.147 04:14:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:28.147 04:14:29 -- common/autotest_common.sh@1187 -- # local i=0 00:18:28.147 04:14:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.147 04:14:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:28.147 04:14:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:30.680 04:14:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:30.680 04:14:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:30.680 04:14:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:30.680 04:14:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:30.680 04:14:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.680 04:14:31 -- common/autotest_common.sh@1197 -- # return 0 00:18:30.680 04:14:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:30.680 04:14:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:30.680 04:14:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:30.680 04:14:32 -- common/autotest_common.sh@1187 -- # local i=0 00:18:30.680 04:14:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.680 04:14:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:30.680 04:14:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.583 04:14:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.583 04:14:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.583 04:14:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:32.583 04:14:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.583 04:14:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.583 04:14:34 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.583 04:14:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.583 04:14:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:32.583 04:14:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:32.583 04:14:34 -- common/autotest_common.sh@1187 -- # local i=0 
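[editor note] On the initiator side, the trace above keeps repeating one pattern for cnode1 through cnode11: connect over TCP, then poll until a block device with the expected serial appears. A condensed sketch follows; the hostnqn/hostid and lsblk check are copied from the trace, while the retry loop is an approximation of the waitforserial helper, which also gives up after roughly 15 attempts.

for i in $(seq 1 11); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
                 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # waitforserial SPDK$i: wait until lsblk reports a namespace with that serial
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
        sleep 2
    done
done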
00:18:32.583 04:14:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.583 04:14:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:32.583 04:14:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:35.116 04:14:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:35.116 04:14:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:35.116 04:14:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:35.116 04:14:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:35.116 04:14:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:35.116 04:14:36 -- common/autotest_common.sh@1197 -- # return 0 00:18:35.116 04:14:36 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:35.116 [global] 00:18:35.116 thread=1 00:18:35.116 invalidate=1 00:18:35.116 rw=read 00:18:35.116 time_based=1 00:18:35.116 runtime=10 00:18:35.116 ioengine=libaio 00:18:35.116 direct=1 00:18:35.116 bs=262144 00:18:35.116 iodepth=64 00:18:35.116 norandommap=1 00:18:35.116 numjobs=1 00:18:35.116 00:18:35.116 [job0] 00:18:35.116 filename=/dev/nvme0n1 00:18:35.116 [job1] 00:18:35.116 filename=/dev/nvme10n1 00:18:35.116 [job2] 00:18:35.116 filename=/dev/nvme1n1 00:18:35.116 [job3] 00:18:35.116 filename=/dev/nvme2n1 00:18:35.116 [job4] 00:18:35.116 filename=/dev/nvme3n1 00:18:35.116 [job5] 00:18:35.116 filename=/dev/nvme4n1 00:18:35.116 [job6] 00:18:35.116 filename=/dev/nvme5n1 00:18:35.116 [job7] 00:18:35.116 filename=/dev/nvme6n1 00:18:35.116 [job8] 00:18:35.116 filename=/dev/nvme7n1 00:18:35.116 [job9] 00:18:35.116 filename=/dev/nvme8n1 00:18:35.116 [job10] 00:18:35.116 filename=/dev/nvme9n1 00:18:35.116 Could not set queue depth (nvme0n1) 00:18:35.116 Could not set queue depth (nvme10n1) 00:18:35.116 Could not set queue depth (nvme1n1) 00:18:35.116 Could not set queue depth (nvme2n1) 00:18:35.116 Could not set queue depth (nvme3n1) 00:18:35.116 Could not set queue depth (nvme4n1) 00:18:35.116 Could not set queue depth (nvme5n1) 00:18:35.116 Could not set queue depth (nvme6n1) 00:18:35.116 Could not set queue depth (nvme7n1) 00:18:35.116 Could not set queue depth (nvme8n1) 00:18:35.116 Could not set queue depth (nvme9n1) 00:18:35.116 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.116 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:35.117 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.117 fio-3.35 00:18:35.117 Starting 11 threads 00:18:47.330 00:18:47.330 job0: (groupid=0, jobs=1): err= 0: pid=91238: Tue Nov 26 04:14:47 2024 00:18:47.330 read: IOPS=880, BW=220MiB/s (231MB/s)(2221MiB/10084msec) 00:18:47.330 slat (usec): min=20, max=106381, avg=1085.94, stdev=4806.15 00:18:47.330 clat (msec): min=12, max=194, avg=71.41, stdev=40.77 00:18:47.330 lat (msec): min=15, max=252, avg=72.49, stdev=41.54 00:18:47.330 clat percentiles (msec): 00:18:47.330 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 31], 00:18:47.330 | 30.00th=[ 36], 40.00th=[ 42], 50.00th=[ 79], 60.00th=[ 91], 00:18:47.330 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 126], 95.00th=[ 144], 00:18:47.330 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 180], 00:18:47.330 | 99.99th=[ 194] 00:18:47.330 bw ( KiB/s): min=90805, max=530468, per=15.18%, avg=225854.70, stdev=140290.78, samples=20 00:18:47.330 iops : min= 354, max= 2072, avg=882.10, stdev=548.00, samples=20 00:18:47.330 lat (msec) : 20=0.91%, 50=46.30%, 100=27.12%, 250=25.68% 00:18:47.330 cpu : usr=0.30%, sys=2.78%, ctx=1660, majf=0, minf=4097 00:18:47.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:47.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.330 issued rwts: total=8884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.330 job1: (groupid=0, jobs=1): err= 0: pid=91239: Tue Nov 26 04:14:47 2024 00:18:47.330 read: IOPS=477, BW=119MiB/s (125MB/s)(1202MiB/10058msec) 00:18:47.330 slat (usec): min=20, max=102097, avg=2027.20, stdev=6959.54 00:18:47.330 clat (msec): min=50, max=320, avg=131.64, stdev=28.04 00:18:47.330 lat (msec): min=72, max=320, avg=133.67, stdev=28.80 00:18:47.330 clat percentiles (msec): 00:18:47.330 | 1.00th=[ 80], 5.00th=[ 90], 10.00th=[ 102], 20.00th=[ 111], 00:18:47.330 | 30.00th=[ 118], 40.00th=[ 125], 50.00th=[ 130], 60.00th=[ 134], 00:18:47.330 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 163], 95.00th=[ 178], 00:18:47.330 | 99.00th=[ 207], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:18:47.330 | 99.99th=[ 321] 00:18:47.330 bw ( KiB/s): min=56945, max=165888, per=8.16%, avg=121409.50, stdev=23172.72, samples=20 00:18:47.330 iops : min= 222, max= 648, avg=474.05, stdev=90.55, samples=20 00:18:47.330 lat (msec) : 100=9.18%, 250=90.05%, 500=0.77% 00:18:47.330 cpu : usr=0.15%, sys=1.76%, ctx=1056, majf=0, minf=4097 00:18:47.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:47.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.330 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.330 job2: (groupid=0, jobs=1): err= 0: pid=91240: Tue Nov 26 04:14:47 2024 00:18:47.330 read: IOPS=419, BW=105MiB/s (110MB/s)(1062MiB/10133msec) 00:18:47.330 slat (usec): min=22, max=108112, avg=2321.14, stdev=8568.05 00:18:47.330 clat (msec): min=15, max=260, avg=150.11, stdev=27.01 00:18:47.330 lat (msec): min=16, max=282, avg=152.43, stdev=28.26 00:18:47.330 clat percentiles (msec): 00:18:47.330 | 1.00th=[ 70], 5.00th=[ 117], 
10.00th=[ 125], 20.00th=[ 131], 00:18:47.330 | 30.00th=[ 136], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 155], 00:18:47.330 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 199], 00:18:47.330 | 99.00th=[ 220], 99.50th=[ 226], 99.90th=[ 251], 99.95th=[ 251], 00:18:47.330 | 99.99th=[ 262] 00:18:47.330 bw ( KiB/s): min=88399, max=126464, per=7.20%, avg=107062.25, stdev=13553.39, samples=20 00:18:47.330 iops : min= 345, max= 494, avg=418.05, stdev=53.07, samples=20 00:18:47.330 lat (msec) : 20=0.07%, 50=0.21%, 100=1.51%, 250=98.09%, 500=0.12% 00:18:47.330 cpu : usr=0.17%, sys=1.69%, ctx=893, majf=0, minf=4097 00:18:47.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:47.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.330 issued rwts: total=4247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.330 job3: (groupid=0, jobs=1): err= 0: pid=91241: Tue Nov 26 04:14:47 2024 00:18:47.330 read: IOPS=589, BW=147MiB/s (155MB/s)(1487MiB/10086msec) 00:18:47.330 slat (usec): min=21, max=89742, avg=1617.15, stdev=6472.85 00:18:47.330 clat (msec): min=23, max=219, avg=106.66, stdev=35.01 00:18:47.330 lat (msec): min=24, max=259, avg=108.28, stdev=35.98 00:18:47.330 clat percentiles (msec): 00:18:47.330 | 1.00th=[ 44], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 67], 00:18:47.330 | 30.00th=[ 80], 40.00th=[ 102], 50.00th=[ 114], 60.00th=[ 122], 00:18:47.330 | 70.00th=[ 127], 80.00th=[ 136], 90.00th=[ 153], 95.00th=[ 161], 00:18:47.330 | 99.00th=[ 182], 99.50th=[ 199], 99.90th=[ 220], 99.95th=[ 220], 00:18:47.330 | 99.99th=[ 220] 00:18:47.330 bw ( KiB/s): min=95935, max=257021, per=10.13%, avg=150621.30, stdev=49756.92, samples=20 00:18:47.330 iops : min= 374, max= 1003, avg=588.20, stdev=194.22, samples=20 00:18:47.330 lat (msec) : 50=2.57%, 100=35.83%, 250=61.60% 00:18:47.330 cpu : usr=0.15%, sys=2.22%, ctx=1245, majf=0, minf=4097 00:18:47.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:47.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.330 issued rwts: total=5948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.330 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.330 job4: (groupid=0, jobs=1): err= 0: pid=91242: Tue Nov 26 04:14:47 2024 00:18:47.330 read: IOPS=637, BW=159MiB/s (167MB/s)(1606MiB/10078msec) 00:18:47.330 slat (usec): min=16, max=102217, avg=1519.09, stdev=6088.13 00:18:47.330 clat (msec): min=2, max=250, avg=98.70, stdev=42.46 00:18:47.330 lat (msec): min=2, max=250, avg=100.22, stdev=43.42 00:18:47.330 clat percentiles (msec): 00:18:47.330 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 63], 00:18:47.330 | 30.00th=[ 72], 40.00th=[ 87], 50.00th=[ 107], 60.00th=[ 120], 00:18:47.330 | 70.00th=[ 127], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 161], 00:18:47.331 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 194], 99.95th=[ 201], 00:18:47.331 | 99.99th=[ 251] 00:18:47.331 bw ( KiB/s): min=65916, max=320383, per=10.94%, avg=162796.20, stdev=64798.64, samples=20 00:18:47.331 iops : min= 257, max= 1251, avg=635.75, stdev=252.98, samples=20 00:18:47.331 lat (msec) : 4=0.19%, 10=2.04%, 20=3.13%, 50=7.30%, 100=33.34% 00:18:47.331 lat (msec) : 250=53.99%, 500=0.02% 00:18:47.331 cpu : usr=0.22%, sys=2.02%, 
ctx=1155, majf=0, minf=4097 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=6425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 job5: (groupid=0, jobs=1): err= 0: pid=91243: Tue Nov 26 04:14:47 2024 00:18:47.331 read: IOPS=558, BW=140MiB/s (146MB/s)(1414MiB/10131msec) 00:18:47.331 slat (usec): min=17, max=140083, avg=1726.49, stdev=7931.18 00:18:47.331 clat (usec): min=1807, max=292775, avg=112643.73, stdev=46217.90 00:18:47.331 lat (usec): min=1850, max=300197, avg=114370.22, stdev=47405.17 00:18:47.331 clat percentiles (msec): 00:18:47.331 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 67], 20.00th=[ 84], 00:18:47.331 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 117], 00:18:47.331 | 70.00th=[ 136], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 184], 00:18:47.331 | 99.00th=[ 213], 99.50th=[ 245], 99.90th=[ 275], 99.95th=[ 288], 00:18:47.331 | 99.99th=[ 292] 00:18:47.331 bw ( KiB/s): min=73875, max=294912, per=9.62%, avg=143160.20, stdev=54536.49, samples=20 00:18:47.331 iops : min= 288, max= 1152, avg=559.00, stdev=213.17, samples=20 00:18:47.331 lat (msec) : 2=0.09%, 4=0.57%, 10=1.26%, 20=2.93%, 50=3.92% 00:18:47.331 lat (msec) : 100=34.81%, 250=56.21%, 500=0.21% 00:18:47.331 cpu : usr=0.19%, sys=1.86%, ctx=1207, majf=0, minf=4097 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=5657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 job6: (groupid=0, jobs=1): err= 0: pid=91244: Tue Nov 26 04:14:47 2024 00:18:47.331 read: IOPS=409, BW=102MiB/s (107MB/s)(1038MiB/10133msec) 00:18:47.331 slat (usec): min=21, max=102951, avg=2385.49, stdev=8506.05 00:18:47.331 clat (msec): min=8, max=276, avg=153.52, stdev=29.49 00:18:47.331 lat (msec): min=9, max=280, avg=155.90, stdev=30.77 00:18:47.331 clat percentiles (msec): 00:18:47.331 | 1.00th=[ 31], 5.00th=[ 117], 10.00th=[ 126], 20.00th=[ 133], 00:18:47.331 | 30.00th=[ 140], 40.00th=[ 144], 50.00th=[ 150], 60.00th=[ 159], 00:18:47.331 | 70.00th=[ 167], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 201], 00:18:47.331 | 99.00th=[ 220], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 275], 00:18:47.331 | 99.99th=[ 279] 00:18:47.331 bw ( KiB/s): min=83456, max=123392, per=7.03%, avg=104606.60, stdev=13605.74, samples=20 00:18:47.331 iops : min= 326, max= 482, avg=408.45, stdev=53.29, samples=20 00:18:47.331 lat (msec) : 10=0.07%, 20=0.67%, 50=0.43%, 250=98.24%, 500=0.58% 00:18:47.331 cpu : usr=0.19%, sys=1.57%, ctx=858, majf=0, minf=4097 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=4151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 job7: (groupid=0, jobs=1): err= 0: pid=91245: Tue Nov 26 04:14:47 2024 00:18:47.331 read: IOPS=458, 
BW=115MiB/s (120MB/s)(1155MiB/10064msec) 00:18:47.331 slat (usec): min=20, max=116459, avg=2131.62, stdev=7361.78 00:18:47.331 clat (msec): min=12, max=252, avg=137.16, stdev=27.51 00:18:47.331 lat (msec): min=12, max=301, avg=139.29, stdev=28.50 00:18:47.331 clat percentiles (msec): 00:18:47.331 | 1.00th=[ 80], 5.00th=[ 91], 10.00th=[ 101], 20.00th=[ 114], 00:18:47.331 | 30.00th=[ 125], 40.00th=[ 132], 50.00th=[ 138], 60.00th=[ 142], 00:18:47.331 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 182], 00:18:47.331 | 99.00th=[ 205], 99.50th=[ 209], 99.90th=[ 239], 99.95th=[ 251], 00:18:47.331 | 99.99th=[ 253] 00:18:47.331 bw ( KiB/s): min=83456, max=171863, per=7.84%, avg=116637.60, stdev=21628.12, samples=20 00:18:47.331 iops : min= 326, max= 671, avg=455.50, stdev=84.48, samples=20 00:18:47.331 lat (msec) : 20=0.13%, 50=0.02%, 100=9.94%, 250=89.85%, 500=0.06% 00:18:47.331 cpu : usr=0.24%, sys=1.39%, ctx=846, majf=0, minf=4098 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=4619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 job8: (groupid=0, jobs=1): err= 0: pid=91246: Tue Nov 26 04:14:47 2024 00:18:47.331 read: IOPS=478, BW=120MiB/s (126MB/s)(1204MiB/10059msec) 00:18:47.331 slat (usec): min=20, max=110814, avg=2048.66, stdev=7188.40 00:18:47.331 clat (msec): min=54, max=237, avg=131.34, stdev=25.83 00:18:47.331 lat (msec): min=73, max=264, avg=133.39, stdev=26.87 00:18:47.331 clat percentiles (msec): 00:18:47.331 | 1.00th=[ 78], 5.00th=[ 90], 10.00th=[ 99], 20.00th=[ 110], 00:18:47.331 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 130], 60.00th=[ 136], 00:18:47.331 | 70.00th=[ 144], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 176], 00:18:47.331 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 218], 99.95th=[ 228], 00:18:47.331 | 99.99th=[ 239] 00:18:47.331 bw ( KiB/s): min=83110, max=169133, per=8.18%, avg=121652.75, stdev=23646.33, samples=20 00:18:47.331 iops : min= 324, max= 660, avg=475.05, stdev=92.39, samples=20 00:18:47.331 lat (msec) : 100=11.34%, 250=88.66% 00:18:47.331 cpu : usr=0.13%, sys=1.70%, ctx=979, majf=0, minf=4097 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 job9: (groupid=0, jobs=1): err= 0: pid=91247: Tue Nov 26 04:14:47 2024 00:18:47.331 read: IOPS=406, BW=102MiB/s (107MB/s)(1029MiB/10124msec) 00:18:47.331 slat (usec): min=21, max=105439, avg=2428.84, stdev=8764.88 00:18:47.331 clat (msec): min=102, max=280, avg=154.64, stdev=22.37 00:18:47.331 lat (msec): min=102, max=280, avg=157.07, stdev=24.07 00:18:47.331 clat percentiles (msec): 00:18:47.331 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 129], 20.00th=[ 136], 00:18:47.331 | 30.00th=[ 142], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 157], 00:18:47.331 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:18:47.331 | 99.00th=[ 211], 99.50th=[ 232], 99.90th=[ 262], 99.95th=[ 279], 00:18:47.331 | 99.99th=[ 279] 00:18:47.331 bw ( 
KiB/s): min=86866, max=122368, per=6.97%, avg=103728.25, stdev=12045.79, samples=20 00:18:47.331 iops : min= 339, max= 478, avg=405.10, stdev=47.13, samples=20 00:18:47.331 lat (msec) : 250=99.78%, 500=0.22% 00:18:47.331 cpu : usr=0.25%, sys=1.63%, ctx=674, majf=0, minf=4097 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=4117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 job10: (groupid=0, jobs=1): err= 0: pid=91248: Tue Nov 26 04:14:47 2024 00:18:47.331 read: IOPS=514, BW=129MiB/s (135MB/s)(1303MiB/10127msec) 00:18:47.331 slat (usec): min=15, max=96177, avg=1878.28, stdev=6693.43 00:18:47.331 clat (msec): min=54, max=292, avg=122.28, stdev=39.79 00:18:47.331 lat (msec): min=54, max=292, avg=124.16, stdev=40.70 00:18:47.331 clat percentiles (msec): 00:18:47.331 | 1.00th=[ 66], 5.00th=[ 78], 10.00th=[ 83], 20.00th=[ 89], 00:18:47.331 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 107], 60.00th=[ 121], 00:18:47.331 | 70.00th=[ 140], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 197], 00:18:47.331 | 99.00th=[ 226], 99.50th=[ 236], 99.90th=[ 292], 99.95th=[ 292], 00:18:47.331 | 99.99th=[ 292] 00:18:47.331 bw ( KiB/s): min=82432, max=190464, per=8.86%, avg=131788.55, stdev=36657.54, samples=20 00:18:47.331 iops : min= 322, max= 744, avg=514.65, stdev=143.28, samples=20 00:18:47.331 lat (msec) : 100=40.72%, 250=58.97%, 500=0.31% 00:18:47.331 cpu : usr=0.16%, sys=1.66%, ctx=979, majf=0, minf=4097 00:18:47.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:47.331 issued rwts: total=5211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.331 00:18:47.331 Run status group 0 (all jobs): 00:18:47.331 READ: bw=1453MiB/s (1523MB/s), 102MiB/s-220MiB/s (107MB/s-231MB/s), io=14.4GiB (15.4GB), run=10058-10133msec 00:18:47.331 00:18:47.331 Disk stats (read/write): 00:18:47.331 nvme0n1: ios=17640/0, merge=0/0, ticks=1225513/0, in_queue=1225513, util=97.01% 00:18:47.331 nvme10n1: ios=9484/0, merge=0/0, ticks=1234601/0, in_queue=1234601, util=97.22% 00:18:47.331 nvme1n1: ios=8367/0, merge=0/0, ticks=1237725/0, in_queue=1237725, util=97.92% 00:18:47.331 nvme2n1: ios=11782/0, merge=0/0, ticks=1237569/0, in_queue=1237569, util=97.82% 00:18:47.331 nvme3n1: ios=12722/0, merge=0/0, ticks=1239786/0, in_queue=1239786, util=97.82% 00:18:47.331 nvme4n1: ios=11192/0, merge=0/0, ticks=1233669/0, in_queue=1233669, util=98.11% 00:18:47.331 nvme5n1: ios=8174/0, merge=0/0, ticks=1232652/0, in_queue=1232652, util=98.34% 00:18:47.331 nvme6n1: ios=9111/0, merge=0/0, ticks=1241465/0, in_queue=1241465, util=98.35% 00:18:47.331 nvme7n1: ios=9505/0, merge=0/0, ticks=1239922/0, in_queue=1239922, util=98.58% 00:18:47.331 nvme8n1: ios=8106/0, merge=0/0, ticks=1239840/0, in_queue=1239840, util=98.90% 00:18:47.331 nvme9n1: ios=10295/0, merge=0/0, ticks=1234384/0, in_queue=1234384, util=98.56% 00:18:47.331 04:14:47 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:47.331 [global] 
00:18:47.331 thread=1 00:18:47.331 invalidate=1 00:18:47.332 rw=randwrite 00:18:47.332 time_based=1 00:18:47.332 runtime=10 00:18:47.332 ioengine=libaio 00:18:47.332 direct=1 00:18:47.332 bs=262144 00:18:47.332 iodepth=64 00:18:47.332 norandommap=1 00:18:47.332 numjobs=1 00:18:47.332 00:18:47.332 [job0] 00:18:47.332 filename=/dev/nvme0n1 00:18:47.332 [job1] 00:18:47.332 filename=/dev/nvme10n1 00:18:47.332 [job2] 00:18:47.332 filename=/dev/nvme1n1 00:18:47.332 [job3] 00:18:47.332 filename=/dev/nvme2n1 00:18:47.332 [job4] 00:18:47.332 filename=/dev/nvme3n1 00:18:47.332 [job5] 00:18:47.332 filename=/dev/nvme4n1 00:18:47.332 [job6] 00:18:47.332 filename=/dev/nvme5n1 00:18:47.332 [job7] 00:18:47.332 filename=/dev/nvme6n1 00:18:47.332 [job8] 00:18:47.332 filename=/dev/nvme7n1 00:18:47.332 [job9] 00:18:47.332 filename=/dev/nvme8n1 00:18:47.332 [job10] 00:18:47.332 filename=/dev/nvme9n1 00:18:47.332 Could not set queue depth (nvme0n1) 00:18:47.332 Could not set queue depth (nvme10n1) 00:18:47.332 Could not set queue depth (nvme1n1) 00:18:47.332 Could not set queue depth (nvme2n1) 00:18:47.332 Could not set queue depth (nvme3n1) 00:18:47.332 Could not set queue depth (nvme4n1) 00:18:47.332 Could not set queue depth (nvme5n1) 00:18:47.332 Could not set queue depth (nvme6n1) 00:18:47.332 Could not set queue depth (nvme7n1) 00:18:47.332 Could not set queue depth (nvme8n1) 00:18:47.332 Could not set queue depth (nvme9n1) 00:18:47.332 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:47.332 fio-3.35 00:18:47.332 Starting 11 threads 00:18:57.322 00:18:57.322 job0: (groupid=0, jobs=1): err= 0: pid=91449: Tue Nov 26 04:14:57 2024 00:18:57.322 write: IOPS=653, BW=163MiB/s (171MB/s)(1646MiB/10081msec); 0 zone resets 00:18:57.322 slat (usec): min=18, max=9595, avg=1513.08, stdev=2547.34 00:18:57.323 clat (msec): min=5, max=175, avg=96.43, stdev= 7.30 00:18:57.323 lat (msec): min=5, max=175, avg=97.94, stdev= 6.99 00:18:57.323 clat percentiles (msec): 00:18:57.323 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 93], 00:18:57.323 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 97], 60.00th=[ 99], 00:18:57.323 | 70.00th=[ 99], 80.00th=[ 100], 90.00th=[ 101], 95.00th=[ 102], 
00:18:57.323 | 99.00th=[ 105], 99.50th=[ 127], 99.90th=[ 165], 99.95th=[ 171], 00:18:57.323 | 99.99th=[ 176] 00:18:57.323 bw ( KiB/s): min=163840, max=170496, per=12.57%, avg=166912.50, stdev=2112.47, samples=20 00:18:57.323 iops : min= 640, max= 666, avg=651.90, stdev= 8.17, samples=20 00:18:57.323 lat (msec) : 10=0.03%, 20=0.12%, 50=0.30%, 100=88.06%, 250=11.48% 00:18:57.323 cpu : usr=1.85%, sys=1.92%, ctx=8311, majf=0, minf=1 00:18:57.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:57.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.323 issued rwts: total=0,6585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.323 job1: (groupid=0, jobs=1): err= 0: pid=91450: Tue Nov 26 04:14:57 2024 00:18:57.323 write: IOPS=197, BW=49.4MiB/s (51.8MB/s)(506MiB/10232msec); 0 zone resets 00:18:57.323 slat (usec): min=19, max=111249, avg=4956.98, stdev=10040.00 00:18:57.323 clat (msec): min=40, max=524, avg=318.74, stdev=44.78 00:18:57.323 lat (msec): min=40, max=524, avg=323.70, stdev=44.25 00:18:57.323 clat percentiles (msec): 00:18:57.323 | 1.00th=[ 103], 5.00th=[ 262], 10.00th=[ 279], 20.00th=[ 300], 00:18:57.323 | 30.00th=[ 313], 40.00th=[ 321], 50.00th=[ 330], 60.00th=[ 334], 00:18:57.323 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 359], 00:18:57.323 | 99.00th=[ 426], 99.50th=[ 468], 99.90th=[ 506], 99.95th=[ 527], 00:18:57.323 | 99.99th=[ 527] 00:18:57.323 bw ( KiB/s): min=40960, max=53248, per=3.77%, avg=50119.75, stdev=3142.59, samples=20 00:18:57.323 iops : min= 160, max= 208, avg=195.75, stdev=12.28, samples=20 00:18:57.323 lat (msec) : 50=0.10%, 100=0.79%, 250=2.72%, 500=96.09%, 750=0.30% 00:18:57.323 cpu : usr=0.49%, sys=0.65%, ctx=2150, majf=0, minf=1 00:18:57.323 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:18:57.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.323 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.323 issued rwts: total=0,2022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.323 job2: (groupid=0, jobs=1): err= 0: pid=91462: Tue Nov 26 04:14:57 2024 00:18:57.323 write: IOPS=651, BW=163MiB/s (171MB/s)(1642MiB/10086msec); 0 zone resets 00:18:57.323 slat (usec): min=19, max=26477, avg=1516.46, stdev=2582.73 00:18:57.323 clat (msec): min=2, max=179, avg=96.71, stdev= 7.86 00:18:57.323 lat (msec): min=2, max=179, avg=98.23, stdev= 7.60 00:18:57.323 clat percentiles (msec): 00:18:57.323 | 1.00th=[ 89], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 93], 00:18:57.323 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 97], 60.00th=[ 99], 00:18:57.323 | 70.00th=[ 100], 80.00th=[ 100], 90.00th=[ 101], 95.00th=[ 102], 00:18:57.323 | 99.00th=[ 118], 99.50th=[ 132], 99.90th=[ 169], 99.95th=[ 174], 00:18:57.323 | 99.99th=[ 180] 00:18:57.323 bw ( KiB/s): min=157067, max=170496, per=12.55%, avg=166612.90, stdev=2771.49, samples=20 00:18:57.323 iops : min= 613, max= 666, avg=650.70, stdev=10.93, samples=20 00:18:57.323 lat (msec) : 4=0.05%, 20=0.12%, 50=0.37%, 100=86.66%, 250=12.80% 00:18:57.323 cpu : usr=1.10%, sys=2.22%, ctx=7971, majf=0, minf=1 00:18:57.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:57.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:57.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.323 issued rwts: total=0,6569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.323 job3: (groupid=0, jobs=1): err= 0: pid=91463: Tue Nov 26 04:14:57 2024 00:18:57.323 write: IOPS=191, BW=47.8MiB/s (50.1MB/s)(489MiB/10227msec); 0 zone resets 00:18:57.323 slat (usec): min=25, max=113455, avg=5109.30, stdev=10576.24 00:18:57.323 clat (msec): min=31, max=562, avg=329.47, stdev=43.25 00:18:57.323 lat (msec): min=31, max=562, avg=334.58, stdev=42.39 00:18:57.323 clat percentiles (msec): 00:18:57.323 | 1.00th=[ 197], 5.00th=[ 271], 10.00th=[ 284], 20.00th=[ 309], 00:18:57.323 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 347], 00:18:57.323 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 363], 00:18:57.323 | 99.00th=[ 464], 99.50th=[ 518], 99.90th=[ 567], 99.95th=[ 567], 00:18:57.323 | 99.99th=[ 567] 00:18:57.323 bw ( KiB/s): min=42581, max=53248, per=3.65%, avg=48430.65, stdev=3446.46, samples=20 00:18:57.323 iops : min= 166, max= 208, avg=189.10, stdev=13.58, samples=20 00:18:57.323 lat (msec) : 50=0.15%, 100=0.82%, 250=1.43%, 500=96.98%, 750=0.61% 00:18:57.323 cpu : usr=0.51%, sys=0.70%, ctx=1704, majf=0, minf=1 00:18:57.323 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:18:57.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.323 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.323 issued rwts: total=0,1955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.323 job4: (groupid=0, jobs=1): err= 0: pid=91464: Tue Nov 26 04:14:57 2024 00:18:57.323 write: IOPS=266, BW=66.6MiB/s (69.8MB/s)(680MiB/10217msec); 0 zone resets 00:18:57.323 slat (usec): min=21, max=103378, avg=3643.66, stdev=8314.14 00:18:57.323 clat (msec): min=4, max=524, avg=236.53, stdev=125.99 00:18:57.323 lat (msec): min=4, max=524, avg=240.18, stdev=127.67 00:18:57.323 clat percentiles (msec): 00:18:57.323 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 51], 00:18:57.323 | 30.00th=[ 86], 40.00th=[ 284], 50.00th=[ 309], 60.00th=[ 321], 00:18:57.323 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 351], 00:18:57.323 | 99.00th=[ 384], 99.50th=[ 456], 99.90th=[ 498], 99.95th=[ 527], 00:18:57.323 | 99.99th=[ 527] 00:18:57.323 bw ( KiB/s): min=47104, max=321024, per=5.12%, avg=68014.05, stdev=61909.58, samples=20 00:18:57.323 iops : min= 184, max= 1254, avg=265.50, stdev=241.89, samples=20 00:18:57.323 lat (msec) : 10=0.18%, 20=0.44%, 50=17.86%, 100=11.94%, 250=1.47% 00:18:57.323 lat (msec) : 500=68.03%, 750=0.07% 00:18:57.323 cpu : usr=0.62%, sys=0.64%, ctx=3219, majf=0, minf=1 00:18:57.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:57.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.323 issued rwts: total=0,2721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.323 job5: (groupid=0, jobs=1): err= 0: pid=91465: Tue Nov 26 04:14:57 2024 00:18:57.323 write: IOPS=1226, BW=307MiB/s (321MB/s)(3079MiB/10041msec); 0 zone resets 00:18:57.323 slat (usec): min=16, max=138412, avg=778.85, stdev=2057.40 00:18:57.323 clat (msec): min=3, max=393, avg=51.39, 
stdev=29.79 00:18:57.323 lat (msec): min=5, max=396, avg=52.17, stdev=30.18 00:18:57.323 clat percentiles (msec): 00:18:57.323 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 47], 00:18:57.323 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 50], 00:18:57.323 | 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 52], 95.00th=[ 53], 00:18:57.323 | 99.00th=[ 228], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 384], 00:18:57.323 | 99.99th=[ 388] 00:18:57.323 bw ( KiB/s): min=34816, max=340822, per=23.60%, avg=313394.15, stdev=66787.69, samples=20 00:18:57.323 iops : min= 136, max= 1331, avg=1224.05, stdev=260.85, samples=20 00:18:57.323 lat (msec) : 4=0.01%, 10=0.07%, 20=0.21%, 50=75.26%, 100=23.25% 00:18:57.323 lat (msec) : 250=0.24%, 500=0.95% 00:18:57.324 cpu : usr=3.21%, sys=3.04%, ctx=16559, majf=0, minf=1 00:18:57.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:57.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.324 issued rwts: total=0,12314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.324 job6: (groupid=0, jobs=1): err= 0: pid=91466: Tue Nov 26 04:14:57 2024 00:18:57.324 write: IOPS=185, BW=46.3MiB/s (48.6MB/s)(474MiB/10234msec); 0 zone resets 00:18:57.324 slat (usec): min=22, max=199990, avg=5274.66, stdev=11568.80 00:18:57.324 clat (msec): min=6, max=563, avg=339.75, stdev=39.22 00:18:57.324 lat (msec): min=7, max=563, avg=345.02, stdev=37.65 00:18:57.324 clat percentiles (msec): 00:18:57.324 | 1.00th=[ 247], 5.00th=[ 279], 10.00th=[ 292], 20.00th=[ 317], 00:18:57.324 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:18:57.324 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 368], 95.00th=[ 372], 00:18:57.324 | 99.00th=[ 464], 99.50th=[ 523], 99.90th=[ 567], 99.95th=[ 567], 00:18:57.324 | 99.99th=[ 567] 00:18:57.324 bw ( KiB/s): min=35328, max=53248, per=3.53%, avg=46914.90, stdev=4717.67, samples=20 00:18:57.324 iops : min= 138, max= 208, avg=183.20, stdev=18.40, samples=20 00:18:57.324 lat (msec) : 10=0.05%, 50=0.21%, 250=1.00%, 500=98.00%, 750=0.74% 00:18:57.324 cpu : usr=0.35%, sys=0.50%, ctx=2185, majf=0, minf=1 00:18:57.324 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:18:57.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.324 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.324 issued rwts: total=0,1897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.324 job7: (groupid=0, jobs=1): err= 0: pid=91467: Tue Nov 26 04:14:57 2024 00:18:57.324 write: IOPS=197, BW=49.3MiB/s (51.7MB/s)(504MiB/10221msec); 0 zone resets 00:18:57.324 slat (usec): min=21, max=100688, avg=4953.53, stdev=9855.54 00:18:57.324 clat (msec): min=51, max=557, avg=319.36, stdev=41.79 00:18:57.324 lat (msec): min=51, max=557, avg=324.32, stdev=41.14 00:18:57.324 clat percentiles (msec): 00:18:57.324 | 1.00th=[ 148], 5.00th=[ 262], 10.00th=[ 279], 20.00th=[ 296], 00:18:57.324 | 30.00th=[ 313], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 334], 00:18:57.324 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 355], 00:18:57.324 | 99.00th=[ 443], 99.50th=[ 498], 99.90th=[ 535], 99.95th=[ 558], 00:18:57.324 | 99.99th=[ 558] 00:18:57.324 bw ( KiB/s): min=45056, max=53248, per=3.76%, avg=49986.70, stdev=2882.73, 
samples=20 00:18:57.324 iops : min= 176, max= 208, avg=195.20, stdev=11.26, samples=20 00:18:57.324 lat (msec) : 100=0.40%, 250=2.88%, 500=96.23%, 750=0.50% 00:18:57.324 cpu : usr=0.46%, sys=0.76%, ctx=1764, majf=0, minf=1 00:18:57.324 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:18:57.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.324 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.324 issued rwts: total=0,2016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.324 job8: (groupid=0, jobs=1): err= 0: pid=91468: Tue Nov 26 04:14:57 2024 00:18:57.324 write: IOPS=196, BW=49.2MiB/s (51.6MB/s)(503MiB/10229msec); 0 zone resets 00:18:57.324 slat (usec): min=23, max=89445, avg=4966.63, stdev=9910.01 00:18:57.324 clat (msec): min=34, max=568, avg=320.25, stdev=47.36 00:18:57.324 lat (msec): min=34, max=568, avg=325.21, stdev=46.86 00:18:57.324 clat percentiles (msec): 00:18:57.324 | 1.00th=[ 105], 5.00th=[ 266], 10.00th=[ 284], 20.00th=[ 300], 00:18:57.324 | 30.00th=[ 313], 40.00th=[ 321], 50.00th=[ 330], 60.00th=[ 334], 00:18:57.324 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 359], 00:18:57.324 | 99.00th=[ 468], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 567], 00:18:57.324 | 99.99th=[ 567] 00:18:57.324 bw ( KiB/s): min=43008, max=55185, per=3.76%, avg=49889.00, stdev=3230.05, samples=20 00:18:57.324 iops : min= 168, max= 215, avg=194.80, stdev=12.53, samples=20 00:18:57.324 lat (msec) : 50=0.20%, 100=0.80%, 250=2.63%, 500=95.68%, 750=0.70% 00:18:57.324 cpu : usr=0.50%, sys=0.59%, ctx=2004, majf=0, minf=1 00:18:57.324 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:18:57.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.324 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.324 issued rwts: total=0,2012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.324 job9: (groupid=0, jobs=1): err= 0: pid=91469: Tue Nov 26 04:14:57 2024 00:18:57.324 write: IOPS=1288, BW=322MiB/s (338MB/s)(3234MiB/10043msec); 0 zone resets 00:18:57.324 slat (usec): min=22, max=5835, avg=768.46, stdev=1276.17 00:18:57.324 clat (usec): min=6837, max=89741, avg=48902.52, stdev=2993.48 00:18:57.324 lat (usec): min=6863, max=89797, avg=49670.99, stdev=3070.95 00:18:57.324 clat percentiles (usec): 00:18:57.324 | 1.00th=[45351], 5.00th=[45876], 10.00th=[46400], 20.00th=[46924], 00:18:57.324 | 30.00th=[47449], 40.00th=[47973], 50.00th=[48497], 60.00th=[49021], 00:18:57.324 | 70.00th=[50070], 80.00th=[50594], 90.00th=[51643], 95.00th=[52691], 00:18:57.324 | 99.00th=[55313], 99.50th=[56361], 99.90th=[80217], 99.95th=[83362], 00:18:57.324 | 99.99th=[86508] 00:18:57.324 bw ( KiB/s): min=317440, max=337920, per=24.81%, avg=329483.60, stdev=5093.58, samples=20 00:18:57.324 iops : min= 1240, max= 1320, avg=1287.00, stdev=19.93, samples=20 00:18:57.324 lat (msec) : 10=0.03%, 20=0.09%, 50=72.43%, 100=27.45% 00:18:57.324 cpu : usr=3.44%, sys=2.97%, ctx=15224, majf=0, minf=1 00:18:57.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:57.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.324 issued rwts: total=0,12936,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:57.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.324 job10: (groupid=0, jobs=1): err= 0: pid=91470: Tue Nov 26 04:14:57 2024 00:18:57.324 write: IOPS=201, BW=50.5MiB/s (52.9MB/s)(516MiB/10228msec); 0 zone resets 00:18:57.324 slat (usec): min=23, max=89748, avg=4841.02, stdev=9454.50 00:18:57.324 clat (msec): min=51, max=521, avg=311.97, stdev=42.09 00:18:57.324 lat (msec): min=51, max=521, avg=316.81, stdev=41.62 00:18:57.324 clat percentiles (msec): 00:18:57.324 | 1.00th=[ 110], 5.00th=[ 259], 10.00th=[ 275], 20.00th=[ 296], 00:18:57.324 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 326], 00:18:57.324 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 338], 95.00th=[ 347], 00:18:57.324 | 99.00th=[ 426], 99.50th=[ 464], 99.90th=[ 502], 99.95th=[ 523], 00:18:57.324 | 99.99th=[ 523] 00:18:57.324 bw ( KiB/s): min=47104, max=53248, per=3.86%, avg=51251.20, stdev=1635.20, samples=20 00:18:57.324 iops : min= 184, max= 208, avg=200.20, stdev= 6.39, samples=20 00:18:57.324 lat (msec) : 100=0.97%, 250=3.10%, 500=95.64%, 750=0.29% 00:18:57.324 cpu : usr=0.40%, sys=0.54%, ctx=2352, majf=0, minf=1 00:18:57.324 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=96.9% 00:18:57.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.324 issued rwts: total=0,2065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.324 00:18:57.324 Run status group 0 (all jobs): 00:18:57.324 WRITE: bw=1297MiB/s (1360MB/s), 46.3MiB/s-322MiB/s (48.6MB/s-338MB/s), io=13.0GiB (13.9GB), run=10041-10234msec 00:18:57.324 00:18:57.324 Disk stats (read/write): 00:18:57.324 nvme0n1: ios=49/13040, merge=0/0, ticks=54/1215786, in_queue=1215840, util=97.83% 00:18:57.324 nvme10n1: ios=49/3914, merge=0/0, ticks=50/1201716, in_queue=1201766, util=98.03% 00:18:57.324 nvme1n1: ios=29/13016, merge=0/0, ticks=41/1216005, in_queue=1216046, util=98.13% 00:18:57.325 nvme2n1: ios=15/3785, merge=0/0, ticks=28/1199236, in_queue=1199264, util=98.06% 00:18:57.325 nvme3n1: ios=5/5311, merge=0/0, ticks=10/1201430, in_queue=1201440, util=97.92% 00:18:57.325 nvme4n1: ios=18/24452, merge=0/0, ticks=141/1220720, in_queue=1220861, util=98.25% 00:18:57.325 nvme5n1: ios=0/3664, merge=0/0, ticks=0/1197433, in_queue=1197433, util=98.39% 00:18:57.325 nvme6n1: ios=0/3905, merge=0/0, ticks=0/1199706, in_queue=1199706, util=98.36% 00:18:57.325 nvme7n1: ios=0/3898, merge=0/0, ticks=0/1197611, in_queue=1197611, util=98.73% 00:18:57.325 nvme8n1: ios=0/25730, merge=0/0, ticks=0/1220211, in_queue=1220211, util=98.87% 00:18:57.325 nvme9n1: ios=0/4001, merge=0/0, ticks=0/1202197, in_queue=1202197, util=98.88% 00:18:57.325 04:14:57 -- target/multiconnection.sh@36 -- # sync 00:18:57.325 04:14:57 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:57.325 04:14:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.325 04:14:57 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:57.325 04:14:57 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:57.325 04:14:58 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:57.325 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.325 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.325 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.325 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:57.325 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:57.325 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:57.325 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.325 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.325 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.325 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:57.325 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:57.325 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.325 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.325 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.325 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:57.325 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:57.325 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:57.325 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.325 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.325 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.325 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:57.325 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:57.325 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:57.325 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.325 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.325 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.325 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:57.325 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:57.325 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:57.325 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.325 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.325 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.325 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.325 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:57.325 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:57.325 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:57.325 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.325 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:57.325 04:14:58 -- 
common/autotest_common.sh@1220 -- # return 0 00:18:57.325 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:57.325 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.326 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.326 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.326 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:57.326 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:57.326 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:57.326 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.326 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.326 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:57.326 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.326 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:57.326 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.326 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:57.326 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.326 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.326 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.326 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:57.326 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:57.326 04:14:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:57.326 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.326 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.326 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:57.326 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.326 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:57.326 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.326 04:14:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:57.326 04:14:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.326 04:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:57.326 04:14:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.326 04:14:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.326 04:14:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:57.326 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:57.326 04:14:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:57.326 04:14:59 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.326 04:14:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.326 04:14:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:57.326 04:14:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:57.326 04:14:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.326 04:14:59 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.326 04:14:59 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:57.326 04:14:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.326 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:57.584 04:14:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.584 04:14:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.584 04:14:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:57.584 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:57.584 04:14:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:57.584 04:14:59 -- common/autotest_common.sh@1208 -- # local i=0 00:18:57.584 04:14:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:57.584 04:14:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:57.584 04:14:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:57.584 04:14:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:57.584 04:14:59 -- common/autotest_common.sh@1220 -- # return 0 00:18:57.584 04:14:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:57.584 04:14:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.584 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:57.584 04:14:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.584 04:14:59 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:57.584 04:14:59 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:57.584 04:14:59 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:57.584 04:14:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.584 04:14:59 -- nvmf/common.sh@116 -- # sync 00:18:57.584 04:14:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.584 04:14:59 -- nvmf/common.sh@119 -- # set +e 00:18:57.584 04:14:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.584 04:14:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.584 rmmod nvme_tcp 00:18:57.584 rmmod nvme_fabrics 00:18:57.584 rmmod nvme_keyring 00:18:57.584 04:14:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.584 04:14:59 -- nvmf/common.sh@123 -- # set -e 00:18:57.584 04:14:59 -- nvmf/common.sh@124 -- # return 0 00:18:57.584 04:14:59 -- nvmf/common.sh@477 -- # '[' -n 90755 ']' 00:18:57.584 04:14:59 -- nvmf/common.sh@478 -- # killprocess 90755 00:18:57.584 04:14:59 -- common/autotest_common.sh@936 -- # '[' -z 90755 ']' 00:18:57.584 04:14:59 -- common/autotest_common.sh@940 -- # kill -0 90755 00:18:57.584 04:14:59 -- common/autotest_common.sh@941 -- # uname 00:18:57.584 04:14:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.584 04:14:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90755 00:18:57.584 killing process with pid 90755 00:18:57.584 04:14:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:57.584 04:14:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:57.584 04:14:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90755' 00:18:57.584 04:14:59 -- common/autotest_common.sh@955 -- # kill 90755 00:18:57.584 04:14:59 -- common/autotest_common.sh@960 -- # wait 90755 00:18:58.151 04:14:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:58.151 04:14:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:58.151 04:14:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
00:18:58.151 04:14:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.151 04:14:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:58.151 04:14:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.151 04:14:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.151 04:14:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.151 04:14:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:58.151 ************************************ 00:18:58.151 END TEST nvmf_multiconnection 00:18:58.151 ************************************ 00:18:58.151 00:18:58.151 real 0m50.174s 00:18:58.151 user 2m51.103s 00:18:58.151 sys 0m23.549s 00:18:58.151 04:14:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:58.151 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.151 04:14:59 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:58.151 04:14:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:58.151 04:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:58.151 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.151 ************************************ 00:18:58.151 START TEST nvmf_initiator_timeout 00:18:58.151 ************************************ 00:18:58.151 04:14:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:58.411 * Looking for test storage... 00:18:58.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:58.411 04:14:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:58.411 04:14:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:58.411 04:14:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:58.411 04:15:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:58.411 04:15:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:58.411 04:15:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:58.411 04:15:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:58.411 04:15:00 -- scripts/common.sh@335 -- # IFS=.-: 00:18:58.411 04:15:00 -- scripts/common.sh@335 -- # read -ra ver1 00:18:58.411 04:15:00 -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.411 04:15:00 -- scripts/common.sh@336 -- # read -ra ver2 00:18:58.411 04:15:00 -- scripts/common.sh@337 -- # local 'op=<' 00:18:58.411 04:15:00 -- scripts/common.sh@339 -- # ver1_l=2 00:18:58.411 04:15:00 -- scripts/common.sh@340 -- # ver2_l=1 00:18:58.411 04:15:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:58.411 04:15:00 -- scripts/common.sh@343 -- # case "$op" in 00:18:58.411 04:15:00 -- scripts/common.sh@344 -- # : 1 00:18:58.411 04:15:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:58.411 04:15:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.411 04:15:00 -- scripts/common.sh@364 -- # decimal 1 00:18:58.411 04:15:00 -- scripts/common.sh@352 -- # local d=1 00:18:58.411 04:15:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.411 04:15:00 -- scripts/common.sh@354 -- # echo 1 00:18:58.411 04:15:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:58.411 04:15:00 -- scripts/common.sh@365 -- # decimal 2 00:18:58.411 04:15:00 -- scripts/common.sh@352 -- # local d=2 00:18:58.411 04:15:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.411 04:15:00 -- scripts/common.sh@354 -- # echo 2 00:18:58.411 04:15:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:58.411 04:15:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:58.411 04:15:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:58.411 04:15:00 -- scripts/common.sh@367 -- # return 0 00:18:58.411 04:15:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.411 04:15:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.411 --rc genhtml_branch_coverage=1 00:18:58.411 --rc genhtml_function_coverage=1 00:18:58.411 --rc genhtml_legend=1 00:18:58.411 --rc geninfo_all_blocks=1 00:18:58.411 --rc geninfo_unexecuted_blocks=1 00:18:58.411 00:18:58.411 ' 00:18:58.411 04:15:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.411 --rc genhtml_branch_coverage=1 00:18:58.411 --rc genhtml_function_coverage=1 00:18:58.411 --rc genhtml_legend=1 00:18:58.411 --rc geninfo_all_blocks=1 00:18:58.411 --rc geninfo_unexecuted_blocks=1 00:18:58.411 00:18:58.411 ' 00:18:58.411 04:15:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.411 --rc genhtml_branch_coverage=1 00:18:58.411 --rc genhtml_function_coverage=1 00:18:58.411 --rc genhtml_legend=1 00:18:58.411 --rc geninfo_all_blocks=1 00:18:58.411 --rc geninfo_unexecuted_blocks=1 00:18:58.411 00:18:58.411 ' 00:18:58.411 04:15:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:58.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.411 --rc genhtml_branch_coverage=1 00:18:58.411 --rc genhtml_function_coverage=1 00:18:58.411 --rc genhtml_legend=1 00:18:58.411 --rc geninfo_all_blocks=1 00:18:58.411 --rc geninfo_unexecuted_blocks=1 00:18:58.411 00:18:58.411 ' 00:18:58.411 04:15:00 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.411 04:15:00 -- nvmf/common.sh@7 -- # uname -s 00:18:58.411 04:15:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.411 04:15:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.411 04:15:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.411 04:15:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.411 04:15:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.411 04:15:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.411 04:15:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.411 04:15:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.411 04:15:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.411 04:15:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.411 04:15:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
00:18:58.411 04:15:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:18:58.411 04:15:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.411 04:15:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.411 04:15:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.411 04:15:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.411 04:15:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.411 04:15:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.411 04:15:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.411 04:15:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.411 04:15:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.411 04:15:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.411 04:15:00 -- paths/export.sh@5 -- # export PATH 00:18:58.411 04:15:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.411 04:15:00 -- nvmf/common.sh@46 -- # : 0 00:18:58.411 04:15:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:58.411 04:15:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:58.411 04:15:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:58.412 04:15:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.412 04:15:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.412 04:15:00 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:58.412 04:15:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:58.412 04:15:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:58.412 04:15:00 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.412 04:15:00 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.412 04:15:00 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:58.412 04:15:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:58.412 04:15:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.412 04:15:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:58.412 04:15:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:58.412 04:15:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:58.412 04:15:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.412 04:15:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.412 04:15:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.412 04:15:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:58.412 04:15:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:58.412 04:15:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:58.412 04:15:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:58.412 04:15:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:58.412 04:15:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:58.412 04:15:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.412 04:15:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.412 04:15:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.412 04:15:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:58.412 04:15:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.412 04:15:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.412 04:15:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.412 04:15:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.412 04:15:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.412 04:15:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.412 04:15:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.412 04:15:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.412 04:15:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:58.412 04:15:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:58.412 Cannot find device "nvmf_tgt_br" 00:18:58.412 04:15:00 -- nvmf/common.sh@154 -- # true 00:18:58.412 04:15:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.412 Cannot find device "nvmf_tgt_br2" 00:18:58.412 04:15:00 -- nvmf/common.sh@155 -- # true 00:18:58.412 04:15:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:58.412 04:15:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:58.412 Cannot find device "nvmf_tgt_br" 00:18:58.412 04:15:00 -- nvmf/common.sh@157 -- # true 00:18:58.412 04:15:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:58.412 Cannot find device "nvmf_tgt_br2" 00:18:58.412 04:15:00 -- nvmf/common.sh@158 -- # true 00:18:58.412 04:15:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:58.671 04:15:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:58.671 04:15:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:58.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.671 04:15:00 -- nvmf/common.sh@161 -- # true 00:18:58.671 04:15:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.671 04:15:00 -- nvmf/common.sh@162 -- # true 00:18:58.671 04:15:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.671 04:15:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.671 04:15:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.671 04:15:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.671 04:15:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.671 04:15:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.671 04:15:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.671 04:15:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:58.671 04:15:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:58.671 04:15:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:58.671 04:15:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:58.671 04:15:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:58.671 04:15:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:58.671 04:15:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.671 04:15:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:58.671 04:15:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.671 04:15:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:58.671 04:15:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:58.671 04:15:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:58.671 04:15:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:58.671 04:15:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:58.671 04:15:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:58.671 04:15:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:58.671 04:15:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:58.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:58.671 00:18:58.671 --- 10.0.0.2 ping statistics --- 00:18:58.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.671 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:58.671 04:15:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:58.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:58.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:58.671 00:18:58.671 --- 10.0.0.3 ping statistics --- 00:18:58.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.671 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:58.671 04:15:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:58.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:58.930 00:18:58.930 --- 10.0.0.1 ping statistics --- 00:18:58.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.930 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:58.930 04:15:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.930 04:15:00 -- nvmf/common.sh@421 -- # return 0 00:18:58.930 04:15:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:58.930 04:15:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.930 04:15:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:58.930 04:15:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:58.930 04:15:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.930 04:15:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:58.930 04:15:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:58.930 04:15:00 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:58.930 04:15:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:58.930 04:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.930 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:18:58.930 04:15:00 -- nvmf/common.sh@469 -- # nvmfpid=91848 00:18:58.930 04:15:00 -- nvmf/common.sh@470 -- # waitforlisten 91848 00:18:58.930 04:15:00 -- common/autotest_common.sh@829 -- # '[' -z 91848 ']' 00:18:58.930 04:15:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.930 04:15:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.930 04:15:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.930 04:15:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.930 04:15:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.930 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:18:58.930 [2024-11-26 04:15:00.512581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:58.930 [2024-11-26 04:15:00.512665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.930 [2024-11-26 04:15:00.651783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.189 [2024-11-26 04:15:00.724525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:59.189 [2024-11-26 04:15:00.725023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.189 [2024-11-26 04:15:00.725177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.189 [2024-11-26 04:15:00.725337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
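For orientation, the nvmf_veth_init/nvmfappstart records above reduce to: build a veth topology with the target side in its own network namespace, verify reachability with single pings, then launch nvmf_tgt inside that namespace. A condensed sketch using the same names and addresses as the transcript (paths relative to the SPDK repo; the "Cannot find device"/"Cannot open network namespace" errors from tearing down not-yet-existing devices are expected and ignored):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

  # Target runs inside the namespace; waitforlisten then polls /var/tmp/spdk.sock for RPCs.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The same helpers run again for the multicontroller test further down, which is why an identical setup sequence appears a second time.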
00:18:59.189 [2024-11-26 04:15:00.725596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.189 [2024-11-26 04:15:00.725771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.189 [2024-11-26 04:15:00.726103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.189 [2024-11-26 04:15:00.726116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.125 04:15:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.125 04:15:01 -- common/autotest_common.sh@862 -- # return 0 00:19:00.125 04:15:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.125 04:15:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 04:15:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:00.125 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 Malloc0 00:19:00.125 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:00.125 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 Delay0 00:19:00.125 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.125 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 [2024-11-26 04:15:01.641491] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.125 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:00.125 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:00.125 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.125 04:15:01 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.125 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.125 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:19:00.125 [2024-11-26 04:15:01.673767] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.125 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.126 04:15:01 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:00.126 04:15:01 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:00.126 04:15:01 -- common/autotest_common.sh@1187 -- # local i=0 00:19:00.126 04:15:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.126 04:15:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:00.126 04:15:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:02.661 04:15:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:02.661 04:15:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:02.661 04:15:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:02.661 04:15:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:02.661 04:15:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.661 04:15:03 -- common/autotest_common.sh@1197 -- # return 0 00:19:02.661 04:15:03 -- target/initiator_timeout.sh@35 -- # fio_pid=91930 00:19:02.661 04:15:03 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:02.661 04:15:03 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:02.661 [global] 00:19:02.661 thread=1 00:19:02.661 invalidate=1 00:19:02.661 rw=write 00:19:02.661 time_based=1 00:19:02.661 runtime=60 00:19:02.661 ioengine=libaio 00:19:02.661 direct=1 00:19:02.661 bs=4096 00:19:02.661 iodepth=1 00:19:02.661 norandommap=0 00:19:02.661 numjobs=1 00:19:02.661 00:19:02.661 verify_dump=1 00:19:02.661 verify_backlog=512 00:19:02.661 verify_state_save=0 00:19:02.661 do_verify=1 00:19:02.661 verify=crc32c-intel 00:19:02.661 [job0] 00:19:02.661 filename=/dev/nvme0n1 00:19:02.661 Could not set queue depth (nvme0n1) 00:19:02.661 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.661 fio-3.35 00:19:02.661 Starting 1 thread 00:19:05.290 04:15:06 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:05.290 04:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.290 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.290 true 00:19:05.290 04:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.290 04:15:06 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:05.290 04:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.290 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.290 true 00:19:05.290 04:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.290 04:15:06 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:05.290 04:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.290 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.290 true 00:19:05.290 04:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.290 04:15:06 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:05.290 04:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.290 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.290 true 00:19:05.290 04:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.290 04:15:06 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:08.572 04:15:09 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:08.572 04:15:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.572 04:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:08.572 true 00:19:08.572 04:15:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.572 04:15:09 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:08.572 04:15:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.572 04:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:08.572 true 00:19:08.572 04:15:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.572 04:15:09 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:08.572 04:15:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.572 04:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:08.572 true 00:19:08.572 04:15:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.572 04:15:09 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:08.572 04:15:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.572 04:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:08.572 true 00:19:08.572 04:15:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.572 04:15:09 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:08.572 04:15:09 -- target/initiator_timeout.sh@54 -- # wait 91930 00:20:04.883 00:20:04.883 job0: (groupid=0, jobs=1): err= 0: pid=91951: Tue Nov 26 04:16:04 2024 00:20:04.883 read: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec) 00:20:04.883 slat (nsec): min=9965, max=87424, avg=13381.74, stdev=3729.33 00:20:04.883 clat (usec): min=150, max=2335, avg=194.68, stdev=22.87 00:20:04.883 lat (usec): min=163, max=2352, avg=208.07, stdev=23.25 00:20:04.883 clat percentiles (usec): 00:20:04.883 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:20:04.883 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:20:04.883 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 227], 00:20:04.883 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 281], 99.95th=[ 310], 00:20:04.883 | 99.99th=[ 660] 00:20:04.883 write: IOPS=846, BW=3386KiB/s (3467kB/s)(198MiB/60000msec); 0 zone resets 00:20:04.883 slat (usec): min=15, max=10075, avg=20.55, stdev=56.18 00:20:04.883 clat (usec): min=115, max=40571k, avg=950.54, stdev=180029.98 00:20:04.883 lat (usec): min=134, max=40571k, avg=971.09, stdev=180029.97 00:20:04.883 clat percentiles (usec): 00:20:04.883 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:20:04.883 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:20:04.883 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:20:04.883 | 99.00th=[ 202], 99.50th=[ 215], 99.90th=[ 253], 99.95th=[ 269], 00:20:04.883 | 99.99th=[ 506] 00:20:04.883 bw ( KiB/s): min= 3640, max=12288, per=100.00%, avg=10187.49, stdev=1751.33, samples=39 00:20:04.883 iops : min= 910, max= 3072, avg=2546.87, stdev=437.83, samples=39 00:20:04.883 lat (usec) : 250=99.63%, 500=0.35%, 750=0.01%, 1000=0.01% 00:20:04.883 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:04.883 cpu : usr=0.49%, sys=2.03%, ctx=101479, majf=0, minf=5 00:20:04.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:04.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:04.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.883 issued rwts: total=50688,50786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:04.883 00:20:04.883 Run status group 0 (all jobs): 00:20:04.883 READ: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:20:04.883 WRITE: bw=3386KiB/s (3467kB/s), 3386KiB/s-3386KiB/s (3467kB/s-3467kB/s), io=198MiB (208MB), run=60000-60000msec 00:20:04.883 00:20:04.883 Disk stats (read/write): 00:20:04.883 nvme0n1: ios=50648/50688, merge=0/0, ticks=10184/8290, in_queue=18474, util=99.63% 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:04.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:04.883 04:16:04 -- common/autotest_common.sh@1208 -- # local i=0 00:20:04.883 04:16:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:04.883 04:16:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.883 04:16:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:04.883 04:16:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.883 04:16:04 -- common/autotest_common.sh@1220 -- # return 0 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:04.883 nvmf hotplug test: fio successful as expected 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.883 04:16:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.883 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.883 04:16:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:04.883 04:16:04 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:04.883 04:16:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:04.883 04:16:04 -- nvmf/common.sh@116 -- # sync 00:20:04.883 04:16:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:04.883 04:16:04 -- nvmf/common.sh@119 -- # set +e 00:20:04.883 04:16:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:04.883 04:16:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:04.883 rmmod nvme_tcp 00:20:04.883 rmmod nvme_fabrics 00:20:04.883 rmmod nvme_keyring 00:20:04.883 04:16:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:04.883 04:16:04 -- nvmf/common.sh@123 -- # set -e 00:20:04.883 04:16:04 -- nvmf/common.sh@124 -- # return 0 00:20:04.883 04:16:04 -- nvmf/common.sh@477 -- # '[' -n 91848 ']' 00:20:04.883 04:16:04 -- nvmf/common.sh@478 -- # killprocess 91848 00:20:04.883 04:16:04 -- common/autotest_common.sh@936 -- # '[' -z 91848 ']' 00:20:04.883 04:16:04 -- common/autotest_common.sh@940 -- # kill -0 91848 00:20:04.883 04:16:04 -- common/autotest_common.sh@941 -- # uname 00:20:04.883 04:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.883 04:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91848 00:20:04.883 04:16:04 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:04.883 04:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:04.883 04:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91848' 00:20:04.883 killing process with pid 91848 00:20:04.883 04:16:04 -- common/autotest_common.sh@955 -- # kill 91848 00:20:04.883 04:16:04 -- common/autotest_common.sh@960 -- # wait 91848 00:20:04.883 04:16:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:04.883 04:16:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:04.883 04:16:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:04.883 04:16:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.883 04:16:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:04.883 04:16:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.883 04:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.883 04:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.883 04:16:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:04.883 00:20:04.883 real 1m4.770s 00:20:04.883 user 4m7.608s 00:20:04.883 sys 0m7.644s 00:20:04.883 04:16:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:04.883 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.883 ************************************ 00:20:04.883 END TEST nvmf_initiator_timeout 00:20:04.883 ************************************ 00:20:04.883 04:16:04 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:04.883 04:16:04 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:04.883 04:16:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.883 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.883 04:16:04 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:04.883 04:16:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.883 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.883 04:16:04 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:04.883 04:16:04 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:04.883 04:16:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:04.883 04:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:04.883 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:04.883 ************************************ 00:20:04.883 START TEST nvmf_multicontroller 00:20:04.883 ************************************ 00:20:04.883 04:16:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:04.883 * Looking for test storage... 
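In condensed form, the nvmf_initiator_timeout run that ends above is the following sequence: wrap a malloc bdev in a delay bdev, export it over NVMe/TCP, start a 60-second verified fio write job, push the delay latencies up while the job is in flight, then restore them and expect fio to finish cleanly. A sketch with the values from the transcript (rpc_cmd is rendered here as direct scripts/rpc.py calls, and the fio-wrapper's device lookup by serial is omitted):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
  fio_pid=$!
  sleep 3
  for lat in avg_read avg_write p99_read; do
      ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
  done
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  for lat in avg_read avg_write p99_read p99_write; do
      ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30   # back to the creation value
  done
  wait "$fio_pid"    # expected outcome: "nvmf hotplug test: fio successful as expected"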
00:20:04.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:04.883 04:16:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:04.884 04:16:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:04.884 04:16:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:04.884 04:16:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:04.884 04:16:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:04.884 04:16:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:04.884 04:16:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:04.884 04:16:04 -- scripts/common.sh@335 -- # IFS=.-: 00:20:04.884 04:16:04 -- scripts/common.sh@335 -- # read -ra ver1 00:20:04.884 04:16:04 -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.884 04:16:04 -- scripts/common.sh@336 -- # read -ra ver2 00:20:04.884 04:16:04 -- scripts/common.sh@337 -- # local 'op=<' 00:20:04.884 04:16:04 -- scripts/common.sh@339 -- # ver1_l=2 00:20:04.884 04:16:04 -- scripts/common.sh@340 -- # ver2_l=1 00:20:04.884 04:16:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:04.884 04:16:04 -- scripts/common.sh@343 -- # case "$op" in 00:20:04.884 04:16:04 -- scripts/common.sh@344 -- # : 1 00:20:04.884 04:16:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:04.884 04:16:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:04.884 04:16:04 -- scripts/common.sh@364 -- # decimal 1 00:20:04.884 04:16:04 -- scripts/common.sh@352 -- # local d=1 00:20:04.884 04:16:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.884 04:16:04 -- scripts/common.sh@354 -- # echo 1 00:20:04.884 04:16:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:04.884 04:16:04 -- scripts/common.sh@365 -- # decimal 2 00:20:04.884 04:16:04 -- scripts/common.sh@352 -- # local d=2 00:20:04.884 04:16:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.884 04:16:04 -- scripts/common.sh@354 -- # echo 2 00:20:04.884 04:16:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:04.884 04:16:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:04.884 04:16:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:04.884 04:16:04 -- scripts/common.sh@367 -- # return 0 00:20:04.884 04:16:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.884 04:16:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.884 --rc genhtml_branch_coverage=1 00:20:04.884 --rc genhtml_function_coverage=1 00:20:04.884 --rc genhtml_legend=1 00:20:04.884 --rc geninfo_all_blocks=1 00:20:04.884 --rc geninfo_unexecuted_blocks=1 00:20:04.884 00:20:04.884 ' 00:20:04.884 04:16:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.884 --rc genhtml_branch_coverage=1 00:20:04.884 --rc genhtml_function_coverage=1 00:20:04.884 --rc genhtml_legend=1 00:20:04.884 --rc geninfo_all_blocks=1 00:20:04.884 --rc geninfo_unexecuted_blocks=1 00:20:04.884 00:20:04.884 ' 00:20:04.884 04:16:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.884 --rc genhtml_branch_coverage=1 00:20:04.884 --rc genhtml_function_coverage=1 00:20:04.884 --rc genhtml_legend=1 00:20:04.884 --rc geninfo_all_blocks=1 00:20:04.884 --rc geninfo_unexecuted_blocks=1 00:20:04.884 00:20:04.884 ' 00:20:04.884 
04:16:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.884 --rc genhtml_branch_coverage=1 00:20:04.884 --rc genhtml_function_coverage=1 00:20:04.884 --rc genhtml_legend=1 00:20:04.884 --rc geninfo_all_blocks=1 00:20:04.884 --rc geninfo_unexecuted_blocks=1 00:20:04.884 00:20:04.884 ' 00:20:04.884 04:16:04 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.884 04:16:04 -- nvmf/common.sh@7 -- # uname -s 00:20:04.884 04:16:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.884 04:16:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.884 04:16:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.884 04:16:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.884 04:16:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.884 04:16:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.884 04:16:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.884 04:16:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.884 04:16:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.884 04:16:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.884 04:16:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:04.884 04:16:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:04.884 04:16:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.884 04:16:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.884 04:16:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.884 04:16:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.884 04:16:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.884 04:16:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.884 04:16:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.884 04:16:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.884 04:16:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.884 04:16:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.884 04:16:04 -- paths/export.sh@5 -- # export PATH 00:20:04.884 04:16:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.884 04:16:04 -- nvmf/common.sh@46 -- # : 0 00:20:04.884 04:16:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:04.884 04:16:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:04.884 04:16:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:04.884 04:16:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.884 04:16:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.884 04:16:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:04.884 04:16:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:04.884 04:16:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:04.884 04:16:04 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.884 04:16:04 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.884 04:16:04 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:04.884 04:16:04 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:04.884 04:16:04 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.884 04:16:04 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:04.884 04:16:04 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:04.884 04:16:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:04.884 04:16:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.884 04:16:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:04.884 04:16:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:04.884 04:16:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:04.884 04:16:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.884 04:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.884 04:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.884 04:16:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:04.884 04:16:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:04.884 04:16:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:04.884 04:16:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:04.884 04:16:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:04.884 04:16:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:04.884 04:16:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.884 04:16:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:04.884 04:16:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.884 04:16:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:04.884 04:16:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.884 04:16:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.884 04:16:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.884 04:16:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.884 04:16:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.884 04:16:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.884 04:16:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.884 04:16:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.884 04:16:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:04.884 04:16:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:04.884 Cannot find device "nvmf_tgt_br" 00:20:04.884 04:16:04 -- nvmf/common.sh@154 -- # true 00:20:04.884 04:16:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.884 Cannot find device "nvmf_tgt_br2" 00:20:04.884 04:16:05 -- nvmf/common.sh@155 -- # true 00:20:04.884 04:16:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:04.884 04:16:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:04.884 Cannot find device "nvmf_tgt_br" 00:20:04.884 04:16:05 -- nvmf/common.sh@157 -- # true 00:20:04.884 04:16:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:04.884 Cannot find device "nvmf_tgt_br2" 00:20:04.885 04:16:05 -- nvmf/common.sh@158 -- # true 00:20:04.885 04:16:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:04.885 04:16:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:04.885 04:16:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.885 04:16:05 -- nvmf/common.sh@161 -- # true 00:20:04.885 04:16:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.885 04:16:05 -- nvmf/common.sh@162 -- # true 00:20:04.885 04:16:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.885 04:16:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.885 04:16:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.885 04:16:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.885 04:16:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.885 04:16:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.885 04:16:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.885 04:16:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.885 04:16:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:04.885 04:16:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:04.885 04:16:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:04.885 04:16:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:04.885 04:16:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:04.885 04:16:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.885 04:16:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.885 04:16:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.885 04:16:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:04.885 04:16:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:04.885 04:16:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.885 04:16:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.885 04:16:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.885 04:16:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.885 04:16:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.885 04:16:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:04.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:20:04.885 00:20:04.885 --- 10.0.0.2 ping statistics --- 00:20:04.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.885 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:04.885 04:16:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:04.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:20:04.885 00:20:04.885 --- 10.0.0.3 ping statistics --- 00:20:04.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.885 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:04.885 04:16:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:20:04.885 00:20:04.885 --- 10.0.0.1 ping statistics --- 00:20:04.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.885 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:04.885 04:16:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.885 04:16:05 -- nvmf/common.sh@421 -- # return 0 00:20:04.885 04:16:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:04.885 04:16:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.885 04:16:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:04.885 04:16:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:04.885 04:16:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.885 04:16:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:04.885 04:16:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:04.885 04:16:05 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:04.885 04:16:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:04.885 04:16:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.885 04:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:04.885 04:16:05 -- nvmf/common.sh@469 -- # nvmfpid=92785 00:20:04.885 04:16:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:04.885 04:16:05 -- nvmf/common.sh@470 -- # waitforlisten 92785 00:20:04.885 04:16:05 -- common/autotest_common.sh@829 -- # '[' -z 92785 ']' 00:20:04.885 04:16:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.885 04:16:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.885 04:16:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.885 04:16:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.885 04:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 [2024-11-26 04:16:05.349844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:04.885 [2024-11-26 04:16:05.350112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.885 [2024-11-26 04:16:05.493300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.885 [2024-11-26 04:16:05.578547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:04.885 [2024-11-26 04:16:05.578978] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.885 [2024-11-26 04:16:05.579161] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.885 [2024-11-26 04:16:05.579332] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.885 [2024-11-26 04:16:05.579560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.885 [2024-11-26 04:16:05.579763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.885 [2024-11-26 04:16:05.579762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.885 04:16:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.885 04:16:06 -- common/autotest_common.sh@862 -- # return 0 00:20:04.885 04:16:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:04.885 04:16:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 04:16:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.885 04:16:06 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 [2024-11-26 04:16:06.391933] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 Malloc0 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 [2024-11-26 04:16:06.460469] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 [2024-11-26 04:16:06.468354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 Malloc1 00:20:04.885 04:16:06 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.885 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.885 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.885 04:16:06 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:04.885 04:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.886 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.886 04:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.886 04:16:06 -- host/multicontroller.sh@44 -- # bdevperf_pid=92837 00:20:04.886 04:16:06 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.886 04:16:06 -- host/multicontroller.sh@47 -- # waitforlisten 92837 /var/tmp/bdevperf.sock 00:20:04.886 04:16:06 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:04.886 04:16:06 -- common/autotest_common.sh@829 -- # '[' -z 92837 ']' 00:20:04.886 04:16:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.886 04:16:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.886 04:16:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
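The bdevperf launch above follows SPDK's usual remote-configuration pattern: start the app idle with -z on a private RPC socket, attach controllers over that socket, then trigger the workload from the companion script. A sketch with the same socket and parameters as the transcript (rpc_cmd -s is rendered as an explicit scripts/rpc.py -s call):

  # 1. bdevperf starts with no bdevs and waits for configuration on its own socket.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  # 2. Attach the target subsystem as controller NVMe0 through that socket.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000

  # 3. Run the configured write workload against whatever bdevs are now attached.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The negative attach_controller calls that follow reuse the NVMe0 name with a conflicting hostnqn, a different subsystem NQN, or the -x disable / -x failover multipath settings, and each is expected to fail with the Code=-114 "already exists" responses captured below. The later attaches on port 4421 succeed (first as a second path for NVMe0, which is then detached, then as a separate NVMe1), leaving the two controllers the test checks for.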
00:20:04.886 04:16:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.886 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:05.822 04:16:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.822 04:16:07 -- common/autotest_common.sh@862 -- # return 0 00:20:05.822 04:16:07 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:05.822 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.822 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.081 NVMe0n1 00:20:06.081 04:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.081 04:16:07 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:06.081 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.081 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.081 04:16:07 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:06.081 04:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.081 1 00:20:06.081 04:16:07 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:06.081 04:16:07 -- common/autotest_common.sh@650 -- # local es=0 00:20:06.081 04:16:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:06.081 04:16:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:06.081 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.081 04:16:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:06.081 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.081 04:16:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:06.081 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.081 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.081 2024/11/26 04:16:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:06.081 request: 00:20:06.081 { 00:20:06.081 "method": "bdev_nvme_attach_controller", 00:20:06.081 "params": { 00:20:06.081 "name": "NVMe0", 00:20:06.081 "trtype": "tcp", 00:20:06.081 "traddr": "10.0.0.2", 00:20:06.081 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:06.081 "hostaddr": "10.0.0.2", 00:20:06.081 "hostsvcid": "60000", 00:20:06.081 "adrfam": "ipv4", 00:20:06.081 "trsvcid": "4420", 00:20:06.081 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:06.081 } 00:20:06.081 } 00:20:06.081 Got JSON-RPC error response 00:20:06.081 GoRPCClient: error on JSON-RPC call 00:20:06.081 04:16:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:06.081 04:16:07 -- 
common/autotest_common.sh@653 -- # es=1 00:20:06.081 04:16:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.081 04:16:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.081 04:16:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.081 04:16:07 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:06.081 04:16:07 -- common/autotest_common.sh@650 -- # local es=0 00:20:06.081 04:16:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:06.081 04:16:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:06.081 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.081 04:16:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:06.081 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.081 04:16:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:06.081 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.081 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.081 2024/11/26 04:16:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:06.081 request: 00:20:06.081 { 00:20:06.081 "method": "bdev_nvme_attach_controller", 00:20:06.081 "params": { 00:20:06.081 "name": "NVMe0", 00:20:06.081 "trtype": "tcp", 00:20:06.081 "traddr": "10.0.0.2", 00:20:06.082 "hostaddr": "10.0.0.2", 00:20:06.082 "hostsvcid": "60000", 00:20:06.082 "adrfam": "ipv4", 00:20:06.082 "trsvcid": "4420", 00:20:06.082 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:06.082 } 00:20:06.082 } 00:20:06.082 Got JSON-RPC error response 00:20:06.082 GoRPCClient: error on JSON-RPC call 00:20:06.082 04:16:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:06.082 04:16:07 -- common/autotest_common.sh@653 -- # es=1 00:20:06.082 04:16:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.082 04:16:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.082 04:16:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.082 04:16:07 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@650 -- # local es=0 00:20:06.082 04:16:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:06.082 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.082 04:16:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:06.082 04:16:07 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.082 04:16:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.082 2024/11/26 04:16:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:06.082 request: 00:20:06.082 { 00:20:06.082 "method": "bdev_nvme_attach_controller", 00:20:06.082 "params": { 00:20:06.082 "name": "NVMe0", 00:20:06.082 "trtype": "tcp", 00:20:06.082 "traddr": "10.0.0.2", 00:20:06.082 "hostaddr": "10.0.0.2", 00:20:06.082 "hostsvcid": "60000", 00:20:06.082 "adrfam": "ipv4", 00:20:06.082 "trsvcid": "4420", 00:20:06.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.082 "multipath": "disable" 00:20:06.082 } 00:20:06.082 } 00:20:06.082 Got JSON-RPC error response 00:20:06.082 GoRPCClient: error on JSON-RPC call 00:20:06.082 04:16:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:06.082 04:16:07 -- common/autotest_common.sh@653 -- # es=1 00:20:06.082 04:16:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.082 04:16:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.082 04:16:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.082 04:16:07 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:06.082 04:16:07 -- common/autotest_common.sh@650 -- # local es=0 00:20:06.082 04:16:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:06.082 04:16:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:06.082 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.082 04:16:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:06.082 04:16:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.082 04:16:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:06.082 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.082 2024/11/26 04:16:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:06.082 request: 00:20:06.082 { 00:20:06.082 "method": "bdev_nvme_attach_controller", 00:20:06.082 "params": { 00:20:06.082 "name": "NVMe0", 
00:20:06.082 "trtype": "tcp", 00:20:06.082 "traddr": "10.0.0.2", 00:20:06.082 "hostaddr": "10.0.0.2", 00:20:06.082 "hostsvcid": "60000", 00:20:06.082 "adrfam": "ipv4", 00:20:06.082 "trsvcid": "4420", 00:20:06.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.082 "multipath": "failover" 00:20:06.082 } 00:20:06.082 } 00:20:06.082 Got JSON-RPC error response 00:20:06.082 GoRPCClient: error on JSON-RPC call 00:20:06.082 04:16:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:06.082 04:16:07 -- common/autotest_common.sh@653 -- # es=1 00:20:06.082 04:16:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.082 04:16:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.082 04:16:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.082 04:16:07 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:06.082 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.082 00:20:06.082 04:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.082 04:16:07 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:06.082 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.082 04:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.082 04:16:07 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:06.082 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.082 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.341 00:20:06.341 04:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.341 04:16:07 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:06.341 04:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.341 04:16:07 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:06.341 04:16:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.341 04:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.341 04:16:07 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:06.341 04:16:07 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:07.290 0 00:20:07.290 04:16:09 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:07.290 04:16:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.290 04:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:07.290 04:16:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.290 04:16:09 -- host/multicontroller.sh@100 -- # killprocess 92837 00:20:07.290 04:16:09 -- common/autotest_common.sh@936 -- # '[' -z 92837 ']' 00:20:07.290 04:16:09 -- common/autotest_common.sh@940 -- # kill -0 92837 00:20:07.290 04:16:09 -- common/autotest_common.sh@941 -- # uname 00:20:07.290 04:16:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.549 04:16:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92837 00:20:07.549 04:16:09 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:07.549 04:16:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:07.549 killing process with pid 92837 00:20:07.549 04:16:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92837' 00:20:07.549 04:16:09 -- common/autotest_common.sh@955 -- # kill 92837 00:20:07.549 04:16:09 -- common/autotest_common.sh@960 -- # wait 92837 00:20:07.549 04:16:09 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.549 04:16:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.549 04:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:07.549 04:16:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.549 04:16:09 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:07.549 04:16:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.808 04:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:07.808 04:16:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.808 04:16:09 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:07.808 04:16:09 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:07.808 04:16:09 -- common/autotest_common.sh@1607 -- # read -r file 00:20:07.808 04:16:09 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:07.808 04:16:09 -- common/autotest_common.sh@1606 -- # sort -u 00:20:07.808 04:16:09 -- common/autotest_common.sh@1608 -- # cat 00:20:07.808 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:07.808 [2024-11-26 04:16:06.580204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:07.808 [2024-11-26 04:16:06.580301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92837 ] 00:20:07.808 [2024-11-26 04:16:06.715884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.808 [2024-11-26 04:16:06.791829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.808 [2024-11-26 04:16:07.852175] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 4b1dc47c-f371-4006-aa9d-d46d58cfff4f already exists 00:20:07.808 [2024-11-26 04:16:07.852222] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:4b1dc47c-f371-4006-aa9d-d46d58cfff4f alias for bdev NVMe1n1 00:20:07.808 [2024-11-26 04:16:07.852256] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:07.808 Running I/O for 1 seconds... 
00:20:07.808 00:20:07.808 Latency(us) 00:20:07.808 [2024-11-26T04:16:09.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.808 [2024-11-26T04:16:09.576Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:07.808 NVMe0n1 : 1.00 24869.79 97.15 0.00 0.00 5134.78 1936.29 9234.62 00:20:07.808 [2024-11-26T04:16:09.576Z] =================================================================================================================== 00:20:07.808 [2024-11-26T04:16:09.576Z] Total : 24869.79 97.15 0.00 0.00 5134.78 1936.29 9234.62 00:20:07.808 Received shutdown signal, test time was about 1.000000 seconds 00:20:07.808 00:20:07.808 Latency(us) 00:20:07.808 [2024-11-26T04:16:09.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.808 [2024-11-26T04:16:09.576Z] =================================================================================================================== 00:20:07.808 [2024-11-26T04:16:09.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.808 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:07.808 04:16:09 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:07.808 04:16:09 -- common/autotest_common.sh@1607 -- # read -r file 00:20:07.808 04:16:09 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:07.808 04:16:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:07.808 04:16:09 -- nvmf/common.sh@116 -- # sync 00:20:07.808 04:16:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:07.808 04:16:09 -- nvmf/common.sh@119 -- # set +e 00:20:07.808 04:16:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:07.808 04:16:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:07.808 rmmod nvme_tcp 00:20:07.808 rmmod nvme_fabrics 00:20:07.808 rmmod nvme_keyring 00:20:07.808 04:16:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:07.808 04:16:09 -- nvmf/common.sh@123 -- # set -e 00:20:07.808 04:16:09 -- nvmf/common.sh@124 -- # return 0 00:20:07.808 04:16:09 -- nvmf/common.sh@477 -- # '[' -n 92785 ']' 00:20:07.808 04:16:09 -- nvmf/common.sh@478 -- # killprocess 92785 00:20:07.808 04:16:09 -- common/autotest_common.sh@936 -- # '[' -z 92785 ']' 00:20:07.808 04:16:09 -- common/autotest_common.sh@940 -- # kill -0 92785 00:20:07.808 04:16:09 -- common/autotest_common.sh@941 -- # uname 00:20:07.808 04:16:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.808 04:16:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92785 00:20:07.808 killing process with pid 92785 00:20:07.808 04:16:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:07.808 04:16:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:07.808 04:16:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92785' 00:20:07.808 04:16:09 -- common/autotest_common.sh@955 -- # kill 92785 00:20:07.808 04:16:09 -- common/autotest_common.sh@960 -- # wait 92785 00:20:08.067 04:16:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.067 04:16:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.067 04:16:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.067 04:16:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.067 04:16:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.067 04:16:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.067 04:16:09 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:08.067 04:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.326 04:16:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:08.326 ************************************ 00:20:08.326 END TEST nvmf_multicontroller 00:20:08.326 ************************************ 00:20:08.326 00:20:08.326 real 0m5.119s 00:20:08.326 user 0m15.845s 00:20:08.326 sys 0m1.175s 00:20:08.326 04:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.326 04:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:08.326 04:16:09 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:08.326 04:16:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.326 04:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.326 04:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:08.326 ************************************ 00:20:08.326 START TEST nvmf_aer 00:20:08.326 ************************************ 00:20:08.326 04:16:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:08.326 * Looking for test storage... 00:20:08.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:08.326 04:16:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.326 04:16:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.326 04:16:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.326 04:16:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.326 04:16:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.326 04:16:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.326 04:16:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.326 04:16:10 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.326 04:16:10 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.326 04:16:10 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.326 04:16:10 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.326 04:16:10 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.326 04:16:10 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.326 04:16:10 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.326 04:16:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.326 04:16:10 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.326 04:16:10 -- scripts/common.sh@344 -- # : 1 00:20:08.326 04:16:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.326 04:16:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.326 04:16:10 -- scripts/common.sh@364 -- # decimal 1 00:20:08.326 04:16:10 -- scripts/common.sh@352 -- # local d=1 00:20:08.326 04:16:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.326 04:16:10 -- scripts/common.sh@354 -- # echo 1 00:20:08.326 04:16:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.326 04:16:10 -- scripts/common.sh@365 -- # decimal 2 00:20:08.326 04:16:10 -- scripts/common.sh@352 -- # local d=2 00:20:08.326 04:16:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.326 04:16:10 -- scripts/common.sh@354 -- # echo 2 00:20:08.326 04:16:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.585 04:16:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.585 04:16:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.585 04:16:10 -- scripts/common.sh@367 -- # return 0 00:20:08.585 04:16:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.585 04:16:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.585 --rc genhtml_branch_coverage=1 00:20:08.586 --rc genhtml_function_coverage=1 00:20:08.586 --rc genhtml_legend=1 00:20:08.586 --rc geninfo_all_blocks=1 00:20:08.586 --rc geninfo_unexecuted_blocks=1 00:20:08.586 00:20:08.586 ' 00:20:08.586 04:16:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.586 --rc genhtml_branch_coverage=1 00:20:08.586 --rc genhtml_function_coverage=1 00:20:08.586 --rc genhtml_legend=1 00:20:08.586 --rc geninfo_all_blocks=1 00:20:08.586 --rc geninfo_unexecuted_blocks=1 00:20:08.586 00:20:08.586 ' 00:20:08.586 04:16:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.586 --rc genhtml_branch_coverage=1 00:20:08.586 --rc genhtml_function_coverage=1 00:20:08.586 --rc genhtml_legend=1 00:20:08.586 --rc geninfo_all_blocks=1 00:20:08.586 --rc geninfo_unexecuted_blocks=1 00:20:08.586 00:20:08.586 ' 00:20:08.586 04:16:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.586 --rc genhtml_branch_coverage=1 00:20:08.586 --rc genhtml_function_coverage=1 00:20:08.586 --rc genhtml_legend=1 00:20:08.586 --rc geninfo_all_blocks=1 00:20:08.586 --rc geninfo_unexecuted_blocks=1 00:20:08.586 00:20:08.586 ' 00:20:08.586 04:16:10 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.586 04:16:10 -- nvmf/common.sh@7 -- # uname -s 00:20:08.586 04:16:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.586 04:16:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.586 04:16:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.586 04:16:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.586 04:16:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.586 04:16:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.586 04:16:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.586 04:16:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.586 04:16:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.586 04:16:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.586 04:16:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:08.586 
04:16:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:08.586 04:16:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.586 04:16:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.586 04:16:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.586 04:16:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.586 04:16:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.586 04:16:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.586 04:16:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.586 04:16:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.586 04:16:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.586 04:16:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.586 04:16:10 -- paths/export.sh@5 -- # export PATH 00:20:08.586 04:16:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.586 04:16:10 -- nvmf/common.sh@46 -- # : 0 00:20:08.586 04:16:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.586 04:16:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.586 04:16:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.586 04:16:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.586 04:16:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.586 04:16:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:08.586 04:16:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.586 04:16:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.586 04:16:10 -- host/aer.sh@11 -- # nvmftestinit 00:20:08.586 04:16:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:08.586 04:16:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.586 04:16:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:08.586 04:16:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.586 04:16:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.586 04:16:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.586 04:16:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.586 04:16:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.586 04:16:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:08.586 04:16:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:08.586 04:16:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:08.586 04:16:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:08.586 04:16:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:08.586 04:16:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:08.586 04:16:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.586 04:16:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.586 04:16:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:08.586 04:16:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:08.586 04:16:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.586 04:16:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.586 04:16:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.586 04:16:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.586 04:16:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.586 04:16:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.586 04:16:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.586 04:16:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.586 04:16:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:08.586 04:16:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:08.586 Cannot find device "nvmf_tgt_br" 00:20:08.586 04:16:10 -- nvmf/common.sh@154 -- # true 00:20:08.586 04:16:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.586 Cannot find device "nvmf_tgt_br2" 00:20:08.586 04:16:10 -- nvmf/common.sh@155 -- # true 00:20:08.586 04:16:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:08.586 04:16:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:08.586 Cannot find device "nvmf_tgt_br" 00:20:08.586 04:16:10 -- nvmf/common.sh@157 -- # true 00:20:08.586 04:16:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:08.586 Cannot find device "nvmf_tgt_br2" 00:20:08.586 04:16:10 -- nvmf/common.sh@158 -- # true 00:20:08.586 04:16:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:08.586 04:16:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:08.586 04:16:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.586 04:16:10 -- nvmf/common.sh@161 -- # true 00:20:08.586 04:16:10 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.586 04:16:10 -- nvmf/common.sh@162 -- # true 00:20:08.586 04:16:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.586 04:16:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.586 04:16:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.586 04:16:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.586 04:16:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.586 04:16:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.586 04:16:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.586 04:16:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:08.586 04:16:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:08.586 04:16:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:08.586 04:16:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:08.845 04:16:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:08.845 04:16:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:08.845 04:16:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.845 04:16:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.845 04:16:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.845 04:16:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:08.845 04:16:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:08.845 04:16:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.845 04:16:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.845 04:16:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.845 04:16:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.845 04:16:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.845 04:16:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:08.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:20:08.845 00:20:08.845 --- 10.0.0.2 ping statistics --- 00:20:08.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.845 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:08.845 04:16:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:08.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:08.845 00:20:08.845 --- 10.0.0.3 ping statistics --- 00:20:08.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.845 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:08.845 04:16:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:08.845 00:20:08.845 --- 10.0.0.1 ping statistics --- 00:20:08.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.845 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:08.845 04:16:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.845 04:16:10 -- nvmf/common.sh@421 -- # return 0 00:20:08.845 04:16:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:08.845 04:16:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.845 04:16:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:08.845 04:16:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:08.845 04:16:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.845 04:16:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:08.845 04:16:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:08.845 04:16:10 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:08.845 04:16:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:08.845 04:16:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.845 04:16:10 -- common/autotest_common.sh@10 -- # set +x 00:20:08.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.845 04:16:10 -- nvmf/common.sh@469 -- # nvmfpid=93097 00:20:08.845 04:16:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:08.845 04:16:10 -- nvmf/common.sh@470 -- # waitforlisten 93097 00:20:08.845 04:16:10 -- common/autotest_common.sh@829 -- # '[' -z 93097 ']' 00:20:08.845 04:16:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.845 04:16:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.845 04:16:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.845 04:16:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.845 04:16:10 -- common/autotest_common.sh@10 -- # set +x 00:20:08.845 [2024-11-26 04:16:10.526238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:08.845 [2024-11-26 04:16:10.526384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.112 [2024-11-26 04:16:10.663306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.112 [2024-11-26 04:16:10.737833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.112 [2024-11-26 04:16:10.738370] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.112 [2024-11-26 04:16:10.738436] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.112 [2024-11-26 04:16:10.738679] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
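For orientation, the veth/namespace fabric that nvmf_veth_init assembled and the pings above just verified can be condensed into the following sketch. It only summarizes commands already logged in this run; the second target interface (nvmf_tgt_if2 / 10.0.0.3), the "ip link set ... up" steps, and cleanup are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                                 # bridge ties the two pairs together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                              # initiator -> target, as checked above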
00:20:09.112 [2024-11-26 04:16:10.738820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.112 [2024-11-26 04:16:10.738921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.112 [2024-11-26 04:16:10.739601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.112 [2024-11-26 04:16:10.739637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.052 04:16:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.052 04:16:11 -- common/autotest_common.sh@862 -- # return 0 00:20:10.052 04:16:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:10.052 04:16:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.052 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 04:16:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.052 04:16:11 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.052 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.052 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 [2024-11-26 04:16:11.569554] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.052 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.052 04:16:11 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:10.052 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.052 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 Malloc0 00:20:10.052 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.052 04:16:11 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:10.052 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.052 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.052 04:16:11 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.052 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.052 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.053 04:16:11 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.053 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.053 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.053 [2024-11-26 04:16:11.645507] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.053 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.053 04:16:11 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:10.053 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.053 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.053 [2024-11-26 04:16:11.653239] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:10.053 [ 00:20:10.053 { 00:20:10.053 "allow_any_host": true, 00:20:10.053 "hosts": [], 00:20:10.053 "listen_addresses": [], 00:20:10.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:10.053 "subtype": "Discovery" 00:20:10.053 }, 00:20:10.053 { 00:20:10.053 "allow_any_host": true, 00:20:10.053 "hosts": 
[], 00:20:10.053 "listen_addresses": [ 00:20:10.053 { 00:20:10.053 "adrfam": "IPv4", 00:20:10.053 "traddr": "10.0.0.2", 00:20:10.053 "transport": "TCP", 00:20:10.053 "trsvcid": "4420", 00:20:10.053 "trtype": "TCP" 00:20:10.053 } 00:20:10.053 ], 00:20:10.053 "max_cntlid": 65519, 00:20:10.053 "max_namespaces": 2, 00:20:10.053 "min_cntlid": 1, 00:20:10.053 "model_number": "SPDK bdev Controller", 00:20:10.053 "namespaces": [ 00:20:10.053 { 00:20:10.053 "bdev_name": "Malloc0", 00:20:10.053 "name": "Malloc0", 00:20:10.053 "nguid": "3019E8EFB6694A38B73E0DACEB5D3CFD", 00:20:10.053 "nsid": 1, 00:20:10.053 "uuid": "3019e8ef-b669-4a38-b73e-0daceb5d3cfd" 00:20:10.053 } 00:20:10.053 ], 00:20:10.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.053 "serial_number": "SPDK00000000000001", 00:20:10.053 "subtype": "NVMe" 00:20:10.053 } 00:20:10.053 ] 00:20:10.053 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.053 04:16:11 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:10.053 04:16:11 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:10.053 04:16:11 -- host/aer.sh@33 -- # aerpid=93152 00:20:10.053 04:16:11 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:10.053 04:16:11 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:10.053 04:16:11 -- common/autotest_common.sh@1254 -- # local i=0 00:20:10.053 04:16:11 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:10.053 04:16:11 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:10.053 04:16:11 -- common/autotest_common.sh@1257 -- # i=1 00:20:10.053 04:16:11 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:10.053 04:16:11 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:10.053 04:16:11 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:10.053 04:16:11 -- common/autotest_common.sh@1257 -- # i=2 00:20:10.053 04:16:11 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:10.312 04:16:11 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:10.312 04:16:11 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:10.312 04:16:11 -- common/autotest_common.sh@1265 -- # return 0 00:20:10.312 04:16:11 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:10.312 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 Malloc1 00:20:10.312 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 04:16:11 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:10.312 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 04:16:11 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:10.312 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 Asynchronous Event Request test 00:20:10.312 Attaching to 10.0.0.2 00:20:10.312 Attached to 10.0.0.2 00:20:10.312 Registering asynchronous event callbacks... 00:20:10.312 Starting namespace attribute notice tests for all controllers... 
00:20:10.312 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:10.312 aer_cb - Changed Namespace 00:20:10.312 Cleaning up... 00:20:10.312 [ 00:20:10.312 { 00:20:10.312 "allow_any_host": true, 00:20:10.312 "hosts": [], 00:20:10.312 "listen_addresses": [], 00:20:10.312 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:10.312 "subtype": "Discovery" 00:20:10.312 }, 00:20:10.312 { 00:20:10.312 "allow_any_host": true, 00:20:10.312 "hosts": [], 00:20:10.312 "listen_addresses": [ 00:20:10.312 { 00:20:10.312 "adrfam": "IPv4", 00:20:10.312 "traddr": "10.0.0.2", 00:20:10.312 "transport": "TCP", 00:20:10.312 "trsvcid": "4420", 00:20:10.312 "trtype": "TCP" 00:20:10.312 } 00:20:10.312 ], 00:20:10.312 "max_cntlid": 65519, 00:20:10.312 "max_namespaces": 2, 00:20:10.312 "min_cntlid": 1, 00:20:10.312 "model_number": "SPDK bdev Controller", 00:20:10.312 "namespaces": [ 00:20:10.312 { 00:20:10.312 "bdev_name": "Malloc0", 00:20:10.312 "name": "Malloc0", 00:20:10.312 "nguid": "3019E8EFB6694A38B73E0DACEB5D3CFD", 00:20:10.312 "nsid": 1, 00:20:10.312 "uuid": "3019e8ef-b669-4a38-b73e-0daceb5d3cfd" 00:20:10.312 }, 00:20:10.312 { 00:20:10.312 "bdev_name": "Malloc1", 00:20:10.312 "name": "Malloc1", 00:20:10.312 "nguid": "AA155D5D0B6945959CB4351D40F567E3", 00:20:10.312 "nsid": 2, 00:20:10.312 "uuid": "aa155d5d-0b69-4595-9cb4-351d40f567e3" 00:20:10.312 } 00:20:10.312 ], 00:20:10.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.312 "serial_number": "SPDK00000000000001", 00:20:10.312 "subtype": "NVMe" 00:20:10.312 } 00:20:10.312 ] 00:20:10.312 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 04:16:11 -- host/aer.sh@43 -- # wait 93152 00:20:10.312 04:16:11 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:10.312 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 04:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 04:16:11 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:10.312 04:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 04:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 04:16:12 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.312 04:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 04:16:12 -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 04:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 04:16:12 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:10.312 04:16:12 -- host/aer.sh@51 -- # nvmftestfini 00:20:10.312 04:16:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:10.312 04:16:12 -- nvmf/common.sh@116 -- # sync 00:20:10.571 04:16:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:10.571 04:16:12 -- nvmf/common.sh@119 -- # set +e 00:20:10.571 04:16:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:10.571 04:16:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:10.571 rmmod nvme_tcp 00:20:10.571 rmmod nvme_fabrics 00:20:10.571 rmmod nvme_keyring 00:20:10.571 04:16:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:10.571 04:16:12 -- nvmf/common.sh@123 -- # set -e 00:20:10.571 04:16:12 -- nvmf/common.sh@124 -- # return 0 00:20:10.571 04:16:12 -- nvmf/common.sh@477 -- # '[' -n 93097 ']' 00:20:10.571 04:16:12 -- nvmf/common.sh@478 -- # killprocess 93097 00:20:10.571 04:16:12 -- 
common/autotest_common.sh@936 -- # '[' -z 93097 ']' 00:20:10.571 04:16:12 -- common/autotest_common.sh@940 -- # kill -0 93097 00:20:10.571 04:16:12 -- common/autotest_common.sh@941 -- # uname 00:20:10.571 04:16:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.571 04:16:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93097 00:20:10.571 killing process with pid 93097 00:20:10.571 04:16:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:10.571 04:16:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:10.571 04:16:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93097' 00:20:10.571 04:16:12 -- common/autotest_common.sh@955 -- # kill 93097 00:20:10.571 [2024-11-26 04:16:12.182485] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:10.571 04:16:12 -- common/autotest_common.sh@960 -- # wait 93097 00:20:10.829 04:16:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.829 04:16:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:10.829 04:16:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:10.829 04:16:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.829 04:16:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:10.829 04:16:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.829 04:16:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.829 04:16:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.829 04:16:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:10.829 00:20:10.829 real 0m2.526s 00:20:10.829 user 0m6.897s 00:20:10.829 sys 0m0.671s 00:20:10.830 04:16:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:10.830 ************************************ 00:20:10.830 END TEST nvmf_aer 00:20:10.830 04:16:12 -- common/autotest_common.sh@10 -- # set +x 00:20:10.830 ************************************ 00:20:10.830 04:16:12 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:10.830 04:16:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:10.830 04:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.830 04:16:12 -- common/autotest_common.sh@10 -- # set +x 00:20:10.830 ************************************ 00:20:10.830 START TEST nvmf_async_init 00:20:10.830 ************************************ 00:20:10.830 04:16:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:10.830 * Looking for test storage... 
00:20:10.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.830 04:16:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:10.830 04:16:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:10.830 04:16:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:11.089 04:16:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:11.089 04:16:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:11.089 04:16:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:11.089 04:16:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:11.089 04:16:12 -- scripts/common.sh@335 -- # IFS=.-: 00:20:11.089 04:16:12 -- scripts/common.sh@335 -- # read -ra ver1 00:20:11.089 04:16:12 -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.089 04:16:12 -- scripts/common.sh@336 -- # read -ra ver2 00:20:11.089 04:16:12 -- scripts/common.sh@337 -- # local 'op=<' 00:20:11.089 04:16:12 -- scripts/common.sh@339 -- # ver1_l=2 00:20:11.089 04:16:12 -- scripts/common.sh@340 -- # ver2_l=1 00:20:11.089 04:16:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:11.089 04:16:12 -- scripts/common.sh@343 -- # case "$op" in 00:20:11.089 04:16:12 -- scripts/common.sh@344 -- # : 1 00:20:11.089 04:16:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:11.089 04:16:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:11.089 04:16:12 -- scripts/common.sh@364 -- # decimal 1 00:20:11.089 04:16:12 -- scripts/common.sh@352 -- # local d=1 00:20:11.089 04:16:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.089 04:16:12 -- scripts/common.sh@354 -- # echo 1 00:20:11.089 04:16:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:11.089 04:16:12 -- scripts/common.sh@365 -- # decimal 2 00:20:11.089 04:16:12 -- scripts/common.sh@352 -- # local d=2 00:20:11.089 04:16:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.089 04:16:12 -- scripts/common.sh@354 -- # echo 2 00:20:11.089 04:16:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:11.089 04:16:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:11.089 04:16:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:11.089 04:16:12 -- scripts/common.sh@367 -- # return 0 00:20:11.089 04:16:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.089 04:16:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:11.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.089 --rc genhtml_branch_coverage=1 00:20:11.089 --rc genhtml_function_coverage=1 00:20:11.089 --rc genhtml_legend=1 00:20:11.089 --rc geninfo_all_blocks=1 00:20:11.089 --rc geninfo_unexecuted_blocks=1 00:20:11.089 00:20:11.089 ' 00:20:11.089 04:16:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:11.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.089 --rc genhtml_branch_coverage=1 00:20:11.089 --rc genhtml_function_coverage=1 00:20:11.089 --rc genhtml_legend=1 00:20:11.089 --rc geninfo_all_blocks=1 00:20:11.089 --rc geninfo_unexecuted_blocks=1 00:20:11.089 00:20:11.089 ' 00:20:11.089 04:16:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:11.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.089 --rc genhtml_branch_coverage=1 00:20:11.089 --rc genhtml_function_coverage=1 00:20:11.089 --rc genhtml_legend=1 00:20:11.089 --rc geninfo_all_blocks=1 00:20:11.089 --rc geninfo_unexecuted_blocks=1 00:20:11.089 00:20:11.089 ' 00:20:11.089 
04:16:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:11.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.089 --rc genhtml_branch_coverage=1 00:20:11.089 --rc genhtml_function_coverage=1 00:20:11.089 --rc genhtml_legend=1 00:20:11.089 --rc geninfo_all_blocks=1 00:20:11.089 --rc geninfo_unexecuted_blocks=1 00:20:11.089 00:20:11.089 ' 00:20:11.089 04:16:12 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.089 04:16:12 -- nvmf/common.sh@7 -- # uname -s 00:20:11.089 04:16:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.089 04:16:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.089 04:16:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.089 04:16:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.089 04:16:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.089 04:16:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.089 04:16:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.089 04:16:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.089 04:16:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.089 04:16:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.089 04:16:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:11.089 04:16:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:11.089 04:16:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.089 04:16:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.089 04:16:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:11.089 04:16:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.089 04:16:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.089 04:16:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.089 04:16:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.089 04:16:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.089 04:16:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.089 04:16:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.089 04:16:12 -- paths/export.sh@5 -- # export PATH 00:20:11.089 04:16:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.089 04:16:12 -- nvmf/common.sh@46 -- # : 0 00:20:11.089 04:16:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:11.089 04:16:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:11.089 04:16:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:11.089 04:16:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.089 04:16:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.089 04:16:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:11.089 04:16:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:11.089 04:16:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:11.089 04:16:12 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:11.089 04:16:12 -- host/async_init.sh@14 -- # null_block_size=512 00:20:11.089 04:16:12 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:11.089 04:16:12 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:11.089 04:16:12 -- host/async_init.sh@20 -- # uuidgen 00:20:11.089 04:16:12 -- host/async_init.sh@20 -- # tr -d - 00:20:11.089 04:16:12 -- host/async_init.sh@20 -- # nguid=5e631b9e263943fbbee408029d349333 00:20:11.089 04:16:12 -- host/async_init.sh@22 -- # nvmftestinit 00:20:11.089 04:16:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:11.089 04:16:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.089 04:16:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:11.089 04:16:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:11.089 04:16:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:11.089 04:16:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.089 04:16:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.089 04:16:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.089 04:16:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:11.089 04:16:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:11.089 04:16:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:11.089 04:16:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:11.089 04:16:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:11.089 04:16:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:11.089 04:16:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.089 04:16:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.089 04:16:12 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:11.089 04:16:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:11.089 04:16:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.089 04:16:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.089 04:16:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.089 04:16:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.089 04:16:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.089 04:16:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.089 04:16:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.089 04:16:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.089 04:16:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:11.089 04:16:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:11.089 Cannot find device "nvmf_tgt_br" 00:20:11.089 04:16:12 -- nvmf/common.sh@154 -- # true 00:20:11.089 04:16:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.089 Cannot find device "nvmf_tgt_br2" 00:20:11.090 04:16:12 -- nvmf/common.sh@155 -- # true 00:20:11.090 04:16:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:11.090 04:16:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:11.090 Cannot find device "nvmf_tgt_br" 00:20:11.090 04:16:12 -- nvmf/common.sh@157 -- # true 00:20:11.090 04:16:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:11.090 Cannot find device "nvmf_tgt_br2" 00:20:11.090 04:16:12 -- nvmf/common.sh@158 -- # true 00:20:11.090 04:16:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:11.090 04:16:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:11.090 04:16:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.090 04:16:12 -- nvmf/common.sh@161 -- # true 00:20:11.090 04:16:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.090 04:16:12 -- nvmf/common.sh@162 -- # true 00:20:11.090 04:16:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:11.090 04:16:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:11.090 04:16:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:11.348 04:16:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:11.348 04:16:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:11.348 04:16:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:11.348 04:16:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:11.348 04:16:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:11.348 04:16:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:11.348 04:16:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:11.348 04:16:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:11.348 04:16:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:11.348 04:16:12 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:11.348 04:16:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.348 04:16:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.348 04:16:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.348 04:16:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:11.348 04:16:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:11.348 04:16:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.348 04:16:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.348 04:16:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.348 04:16:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.348 04:16:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.348 04:16:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:11.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:20:11.348 00:20:11.348 --- 10.0.0.2 ping statistics --- 00:20:11.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.349 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:20:11.349 04:16:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:11.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:11.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:20:11.349 00:20:11.349 --- 10.0.0.3 ping statistics --- 00:20:11.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.349 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:11.349 04:16:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:11.349 00:20:11.349 --- 10.0.0.1 ping statistics --- 00:20:11.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.349 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:11.349 04:16:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.349 04:16:13 -- nvmf/common.sh@421 -- # return 0 00:20:11.349 04:16:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:11.349 04:16:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.349 04:16:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:11.349 04:16:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:11.349 04:16:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.349 04:16:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:11.349 04:16:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:11.349 04:16:13 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:11.349 04:16:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:11.349 04:16:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.349 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:20:11.349 04:16:13 -- nvmf/common.sh@469 -- # nvmfpid=93339 00:20:11.349 04:16:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:11.349 04:16:13 -- nvmf/common.sh@470 -- # waitforlisten 93339 00:20:11.349 04:16:13 -- common/autotest_common.sh@829 -- # '[' -z 93339 ']' 00:20:11.349 04:16:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.349 04:16:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.349 04:16:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.349 04:16:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.349 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:20:11.607 [2024-11-26 04:16:13.138623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:11.607 [2024-11-26 04:16:13.138733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.607 [2024-11-26 04:16:13.276418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.607 [2024-11-26 04:16:13.359329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:11.607 [2024-11-26 04:16:13.359519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.607 [2024-11-26 04:16:13.359536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.607 [2024-11-26 04:16:13.359547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
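The nvmf_veth_init steps traced above build a small virtual topology before the target starts: one initiator veth left on the host (nvmf_init_if, 10.0.0.1), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and a bridge tying the host-side peer ends together, with iptables opening TCP port 4420. A standalone sketch of the same setup, reconstructed from the commands in the trace (it assumes root privileges and the iproute2/iptables tools; interface names and addresses are the ones used in this run):

# sketch of nvmf_veth_init, reconstructed from the trace above
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if end carries traffic, the *_br end gets enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up on both sides
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and open the NVMe/TCP port
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that this topology forwards traffic in both directions before nvmf_tgt is started inside the namespace.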
00:20:11.607 [2024-11-26 04:16:13.359589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.543 04:16:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.543 04:16:14 -- common/autotest_common.sh@862 -- # return 0 00:20:12.543 04:16:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:12.543 04:16:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 04:16:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.543 04:16:14 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 [2024-11-26 04:16:14.227315] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.543 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.543 04:16:14 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 null0 00:20:12.543 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.543 04:16:14 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.543 04:16:14 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.543 04:16:14 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5e631b9e263943fbbee408029d349333 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.543 04:16:14 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.543 [2024-11-26 04:16:14.267445] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.543 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.543 04:16:14 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:12.543 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.543 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.802 nvme0n1 00:20:12.802 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.802 04:16:14 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.802 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.802 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.802 [ 00:20:12.802 { 00:20:12.802 "aliases": [ 00:20:12.802 "5e631b9e-2639-43fb-bee4-08029d349333" 
00:20:12.802 ], 00:20:12.802 "assigned_rate_limits": { 00:20:12.802 "r_mbytes_per_sec": 0, 00:20:12.802 "rw_ios_per_sec": 0, 00:20:12.802 "rw_mbytes_per_sec": 0, 00:20:12.802 "w_mbytes_per_sec": 0 00:20:12.802 }, 00:20:12.802 "block_size": 512, 00:20:12.802 "claimed": false, 00:20:12.802 "driver_specific": { 00:20:12.802 "mp_policy": "active_passive", 00:20:12.802 "nvme": [ 00:20:12.802 { 00:20:12.802 "ctrlr_data": { 00:20:12.802 "ana_reporting": false, 00:20:12.802 "cntlid": 1, 00:20:12.802 "firmware_revision": "24.01.1", 00:20:12.802 "model_number": "SPDK bdev Controller", 00:20:12.802 "multi_ctrlr": true, 00:20:12.802 "oacs": { 00:20:12.802 "firmware": 0, 00:20:12.802 "format": 0, 00:20:12.802 "ns_manage": 0, 00:20:12.802 "security": 0 00:20:12.802 }, 00:20:12.802 "serial_number": "00000000000000000000", 00:20:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.802 "vendor_id": "0x8086" 00:20:12.802 }, 00:20:12.802 "ns_data": { 00:20:12.802 "can_share": true, 00:20:12.802 "id": 1 00:20:12.802 }, 00:20:12.802 "trid": { 00:20:12.802 "adrfam": "IPv4", 00:20:12.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.802 "traddr": "10.0.0.2", 00:20:12.802 "trsvcid": "4420", 00:20:12.802 "trtype": "TCP" 00:20:12.802 }, 00:20:12.802 "vs": { 00:20:12.802 "nvme_version": "1.3" 00:20:12.802 } 00:20:12.802 } 00:20:12.802 ] 00:20:12.802 }, 00:20:12.802 "name": "nvme0n1", 00:20:12.802 "num_blocks": 2097152, 00:20:12.802 "product_name": "NVMe disk", 00:20:12.802 "supported_io_types": { 00:20:12.802 "abort": true, 00:20:12.802 "compare": true, 00:20:12.802 "compare_and_write": true, 00:20:12.802 "flush": true, 00:20:12.802 "nvme_admin": true, 00:20:12.802 "nvme_io": true, 00:20:12.802 "read": true, 00:20:12.802 "reset": true, 00:20:12.802 "unmap": false, 00:20:12.802 "write": true, 00:20:12.802 "write_zeroes": true 00:20:12.802 }, 00:20:12.802 "uuid": "5e631b9e-2639-43fb-bee4-08029d349333", 00:20:12.802 "zoned": false 00:20:12.802 } 00:20:12.802 ] 00:20:12.802 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.802 04:16:14 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:12.802 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.802 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.802 [2024-11-26 04:16:14.536425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:12.802 [2024-11-26 04:16:14.536514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75fa00 (9): Bad file descriptor 00:20:13.062 [2024-11-26 04:16:14.668819] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
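Everything the async_init test does after the target is up goes through SPDK's JSON-RPC interface; the rpc_cmd helper seen in the trace is essentially the test suite's wrapper around scripts/rpc.py. A condensed replay of the sequence above, using the subsystem name, NGUID and addresses from this run (scripts/rpc.py and the default /var/tmp/spdk.sock socket are assumed here):

# condensed replay of the async_init RPC sequence traced above
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                       # the "*** TCP Transport Init ***" notice
$rpc bdev_null_create null0 1024 512                       # 1024 MiB null bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5e631b9e263943fbbee408029d349333
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side: attach over TCP, exercise a controller reset, then re-read the bdev
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_nvme_reset_controller nvme0
$rpc bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs output before and after the reset shows the namespace identity (uuid/nguid 5e631b9e-2639-43fb-bee4-08029d349333) staying the same while cntlid moves from 1 to 2, i.e. the controller really was torn down and reconnected rather than left untouched.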
00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.062 [ 00:20:13.062 { 00:20:13.062 "aliases": [ 00:20:13.062 "5e631b9e-2639-43fb-bee4-08029d349333" 00:20:13.062 ], 00:20:13.062 "assigned_rate_limits": { 00:20:13.062 "r_mbytes_per_sec": 0, 00:20:13.062 "rw_ios_per_sec": 0, 00:20:13.062 "rw_mbytes_per_sec": 0, 00:20:13.062 "w_mbytes_per_sec": 0 00:20:13.062 }, 00:20:13.062 "block_size": 512, 00:20:13.062 "claimed": false, 00:20:13.062 "driver_specific": { 00:20:13.062 "mp_policy": "active_passive", 00:20:13.062 "nvme": [ 00:20:13.062 { 00:20:13.062 "ctrlr_data": { 00:20:13.062 "ana_reporting": false, 00:20:13.062 "cntlid": 2, 00:20:13.062 "firmware_revision": "24.01.1", 00:20:13.062 "model_number": "SPDK bdev Controller", 00:20:13.062 "multi_ctrlr": true, 00:20:13.062 "oacs": { 00:20:13.062 "firmware": 0, 00:20:13.062 "format": 0, 00:20:13.062 "ns_manage": 0, 00:20:13.062 "security": 0 00:20:13.062 }, 00:20:13.062 "serial_number": "00000000000000000000", 00:20:13.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.062 "vendor_id": "0x8086" 00:20:13.062 }, 00:20:13.062 "ns_data": { 00:20:13.062 "can_share": true, 00:20:13.062 "id": 1 00:20:13.062 }, 00:20:13.062 "trid": { 00:20:13.062 "adrfam": "IPv4", 00:20:13.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.062 "traddr": "10.0.0.2", 00:20:13.062 "trsvcid": "4420", 00:20:13.062 "trtype": "TCP" 00:20:13.062 }, 00:20:13.062 "vs": { 00:20:13.062 "nvme_version": "1.3" 00:20:13.062 } 00:20:13.062 } 00:20:13.062 ] 00:20:13.062 }, 00:20:13.062 "name": "nvme0n1", 00:20:13.062 "num_blocks": 2097152, 00:20:13.062 "product_name": "NVMe disk", 00:20:13.062 "supported_io_types": { 00:20:13.062 "abort": true, 00:20:13.062 "compare": true, 00:20:13.062 "compare_and_write": true, 00:20:13.062 "flush": true, 00:20:13.062 "nvme_admin": true, 00:20:13.062 "nvme_io": true, 00:20:13.062 "read": true, 00:20:13.062 "reset": true, 00:20:13.062 "unmap": false, 00:20:13.062 "write": true, 00:20:13.062 "write_zeroes": true 00:20:13.062 }, 00:20:13.062 "uuid": "5e631b9e-2639-43fb-bee4-08029d349333", 00:20:13.062 "zoned": false 00:20:13.062 } 00:20:13.062 ] 00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@53 -- # mktemp 00:20:13.062 04:16:14 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xCLr1iJZlj 00:20:13.062 04:16:14 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.062 04:16:14 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xCLr1iJZlj 00:20:13.062 04:16:14 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.062 [2024-11-26 04:16:14.732543] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.062 [2024-11-26 04:16:14.732675] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCLr1iJZlj 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xCLr1iJZlj 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.062 [2024-11-26 04:16:14.752549] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.062 nvme0n1 00:20:13.062 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.062 04:16:14 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.062 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.062 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.322 [ 00:20:13.322 { 00:20:13.322 "aliases": [ 00:20:13.322 "5e631b9e-2639-43fb-bee4-08029d349333" 00:20:13.322 ], 00:20:13.322 "assigned_rate_limits": { 00:20:13.322 "r_mbytes_per_sec": 0, 00:20:13.322 "rw_ios_per_sec": 0, 00:20:13.322 "rw_mbytes_per_sec": 0, 00:20:13.322 "w_mbytes_per_sec": 0 00:20:13.322 }, 00:20:13.322 "block_size": 512, 00:20:13.322 "claimed": false, 00:20:13.322 "driver_specific": { 00:20:13.322 "mp_policy": "active_passive", 00:20:13.322 "nvme": [ 00:20:13.322 { 00:20:13.322 "ctrlr_data": { 00:20:13.322 "ana_reporting": false, 00:20:13.322 "cntlid": 3, 00:20:13.322 "firmware_revision": "24.01.1", 00:20:13.322 "model_number": "SPDK bdev Controller", 00:20:13.322 "multi_ctrlr": true, 00:20:13.322 "oacs": { 00:20:13.322 "firmware": 0, 00:20:13.322 "format": 0, 00:20:13.322 "ns_manage": 0, 00:20:13.322 "security": 0 00:20:13.322 }, 00:20:13.322 "serial_number": "00000000000000000000", 00:20:13.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.322 "vendor_id": "0x8086" 00:20:13.322 }, 00:20:13.322 "ns_data": { 00:20:13.322 "can_share": true, 00:20:13.322 "id": 1 00:20:13.322 }, 00:20:13.322 "trid": { 00:20:13.322 "adrfam": "IPv4", 00:20:13.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.322 "traddr": "10.0.0.2", 00:20:13.322 "trsvcid": "4421", 00:20:13.322 "trtype": "TCP" 00:20:13.322 }, 00:20:13.322 "vs": { 00:20:13.322 "nvme_version": "1.3" 00:20:13.322 } 00:20:13.322 } 00:20:13.322 ] 00:20:13.322 }, 00:20:13.322 "name": "nvme0n1", 00:20:13.322 "num_blocks": 2097152, 00:20:13.322 "product_name": "NVMe disk", 00:20:13.322 "supported_io_types": { 00:20:13.322 "abort": true, 00:20:13.322 "compare": true, 00:20:13.322 "compare_and_write": true, 00:20:13.322 "flush": true, 00:20:13.322 "nvme_admin": true, 00:20:13.322 "nvme_io": true, 00:20:13.322 
"read": true, 00:20:13.322 "reset": true, 00:20:13.322 "unmap": false, 00:20:13.322 "write": true, 00:20:13.322 "write_zeroes": true 00:20:13.322 }, 00:20:13.322 "uuid": "5e631b9e-2639-43fb-bee4-08029d349333", 00:20:13.322 "zoned": false 00:20:13.322 } 00:20:13.322 ] 00:20:13.322 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.322 04:16:14 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.322 04:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.322 04:16:14 -- common/autotest_common.sh@10 -- # set +x 00:20:13.322 04:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.322 04:16:14 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.xCLr1iJZlj 00:20:13.322 04:16:14 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:13.322 04:16:14 -- host/async_init.sh@78 -- # nvmftestfini 00:20:13.322 04:16:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:13.322 04:16:14 -- nvmf/common.sh@116 -- # sync 00:20:13.322 04:16:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:13.322 04:16:14 -- nvmf/common.sh@119 -- # set +e 00:20:13.322 04:16:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:13.322 04:16:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:13.322 rmmod nvme_tcp 00:20:13.322 rmmod nvme_fabrics 00:20:13.322 rmmod nvme_keyring 00:20:13.322 04:16:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:13.322 04:16:14 -- nvmf/common.sh@123 -- # set -e 00:20:13.322 04:16:14 -- nvmf/common.sh@124 -- # return 0 00:20:13.322 04:16:14 -- nvmf/common.sh@477 -- # '[' -n 93339 ']' 00:20:13.322 04:16:14 -- nvmf/common.sh@478 -- # killprocess 93339 00:20:13.322 04:16:14 -- common/autotest_common.sh@936 -- # '[' -z 93339 ']' 00:20:13.322 04:16:14 -- common/autotest_common.sh@940 -- # kill -0 93339 00:20:13.322 04:16:14 -- common/autotest_common.sh@941 -- # uname 00:20:13.322 04:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:13.322 04:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93339 00:20:13.322 04:16:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:13.322 04:16:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:13.322 killing process with pid 93339 00:20:13.322 04:16:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93339' 00:20:13.322 04:16:15 -- common/autotest_common.sh@955 -- # kill 93339 00:20:13.322 04:16:15 -- common/autotest_common.sh@960 -- # wait 93339 00:20:13.581 04:16:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:13.581 04:16:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:13.581 04:16:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:13.581 04:16:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.581 04:16:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:13.581 04:16:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.581 04:16:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.581 04:16:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.581 04:16:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:13.581 00:20:13.581 real 0m2.804s 00:20:13.581 user 0m2.625s 00:20:13.581 sys 0m0.692s 00:20:13.581 04:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:13.581 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:20:13.581 ************************************ 00:20:13.581 END TEST nvmf_async_init 00:20:13.581 
************************************ 00:20:13.581 04:16:15 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:13.581 04:16:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:13.581 04:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:13.581 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:20:13.581 ************************************ 00:20:13.581 START TEST dma 00:20:13.581 ************************************ 00:20:13.581 04:16:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:13.840 * Looking for test storage... 00:20:13.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:13.840 04:16:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:13.840 04:16:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:13.840 04:16:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:13.840 04:16:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:13.840 04:16:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:13.840 04:16:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:13.840 04:16:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:13.840 04:16:15 -- scripts/common.sh@335 -- # IFS=.-: 00:20:13.840 04:16:15 -- scripts/common.sh@335 -- # read -ra ver1 00:20:13.840 04:16:15 -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.840 04:16:15 -- scripts/common.sh@336 -- # read -ra ver2 00:20:13.840 04:16:15 -- scripts/common.sh@337 -- # local 'op=<' 00:20:13.840 04:16:15 -- scripts/common.sh@339 -- # ver1_l=2 00:20:13.840 04:16:15 -- scripts/common.sh@340 -- # ver2_l=1 00:20:13.840 04:16:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:13.840 04:16:15 -- scripts/common.sh@343 -- # case "$op" in 00:20:13.840 04:16:15 -- scripts/common.sh@344 -- # : 1 00:20:13.840 04:16:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:13.840 04:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.840 04:16:15 -- scripts/common.sh@364 -- # decimal 1 00:20:13.840 04:16:15 -- scripts/common.sh@352 -- # local d=1 00:20:13.840 04:16:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.840 04:16:15 -- scripts/common.sh@354 -- # echo 1 00:20:13.840 04:16:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:13.840 04:16:15 -- scripts/common.sh@365 -- # decimal 2 00:20:13.840 04:16:15 -- scripts/common.sh@352 -- # local d=2 00:20:13.840 04:16:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.840 04:16:15 -- scripts/common.sh@354 -- # echo 2 00:20:13.840 04:16:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:13.840 04:16:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:13.840 04:16:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:13.840 04:16:15 -- scripts/common.sh@367 -- # return 0 00:20:13.840 04:16:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.840 04:16:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:13.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.840 --rc genhtml_branch_coverage=1 00:20:13.840 --rc genhtml_function_coverage=1 00:20:13.840 --rc genhtml_legend=1 00:20:13.840 --rc geninfo_all_blocks=1 00:20:13.840 --rc geninfo_unexecuted_blocks=1 00:20:13.840 00:20:13.840 ' 00:20:13.840 04:16:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:13.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.840 --rc genhtml_branch_coverage=1 00:20:13.840 --rc genhtml_function_coverage=1 00:20:13.840 --rc genhtml_legend=1 00:20:13.840 --rc geninfo_all_blocks=1 00:20:13.840 --rc geninfo_unexecuted_blocks=1 00:20:13.840 00:20:13.840 ' 00:20:13.840 04:16:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:13.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.840 --rc genhtml_branch_coverage=1 00:20:13.840 --rc genhtml_function_coverage=1 00:20:13.840 --rc genhtml_legend=1 00:20:13.840 --rc geninfo_all_blocks=1 00:20:13.840 --rc geninfo_unexecuted_blocks=1 00:20:13.840 00:20:13.840 ' 00:20:13.840 04:16:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:13.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.840 --rc genhtml_branch_coverage=1 00:20:13.840 --rc genhtml_function_coverage=1 00:20:13.840 --rc genhtml_legend=1 00:20:13.840 --rc geninfo_all_blocks=1 00:20:13.840 --rc geninfo_unexecuted_blocks=1 00:20:13.840 00:20:13.840 ' 00:20:13.840 04:16:15 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.840 04:16:15 -- nvmf/common.sh@7 -- # uname -s 00:20:13.840 04:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.840 04:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.840 04:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.840 04:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.840 04:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.840 04:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.840 04:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.840 04:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.840 04:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.840 04:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.840 04:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:13.840 
04:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:13.840 04:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.840 04:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.840 04:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.840 04:16:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.840 04:16:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.840 04:16:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.840 04:16:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.840 04:16:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.840 04:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.840 04:16:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.840 04:16:15 -- paths/export.sh@5 -- # export PATH 00:20:13.840 04:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.840 04:16:15 -- nvmf/common.sh@46 -- # : 0 00:20:13.841 04:16:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:13.841 04:16:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:13.841 04:16:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:13.841 04:16:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.841 04:16:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.841 04:16:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:13.841 04:16:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:13.841 04:16:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:13.841 04:16:15 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:13.841 04:16:15 -- host/dma.sh@13 -- # exit 0 00:20:13.841 00:20:13.841 real 0m0.217s 00:20:13.841 user 0m0.129s 00:20:13.841 sys 0m0.099s 00:20:13.841 04:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:13.841 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:20:13.841 ************************************ 00:20:13.841 END TEST dma 00:20:13.841 ************************************ 00:20:13.841 04:16:15 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:13.841 04:16:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:13.841 04:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:13.841 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:20:14.100 ************************************ 00:20:14.100 START TEST nvmf_identify 00:20:14.100 ************************************ 00:20:14.100 04:16:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:14.100 * Looking for test storage... 00:20:14.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:14.101 04:16:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:14.101 04:16:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:14.101 04:16:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:14.101 04:16:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:14.101 04:16:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:14.101 04:16:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:14.101 04:16:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:14.101 04:16:15 -- scripts/common.sh@335 -- # IFS=.-: 00:20:14.101 04:16:15 -- scripts/common.sh@335 -- # read -ra ver1 00:20:14.101 04:16:15 -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.101 04:16:15 -- scripts/common.sh@336 -- # read -ra ver2 00:20:14.101 04:16:15 -- scripts/common.sh@337 -- # local 'op=<' 00:20:14.101 04:16:15 -- scripts/common.sh@339 -- # ver1_l=2 00:20:14.101 04:16:15 -- scripts/common.sh@340 -- # ver2_l=1 00:20:14.101 04:16:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:14.101 04:16:15 -- scripts/common.sh@343 -- # case "$op" in 00:20:14.101 04:16:15 -- scripts/common.sh@344 -- # : 1 00:20:14.101 04:16:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:14.101 04:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.101 04:16:15 -- scripts/common.sh@364 -- # decimal 1 00:20:14.101 04:16:15 -- scripts/common.sh@352 -- # local d=1 00:20:14.101 04:16:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.101 04:16:15 -- scripts/common.sh@354 -- # echo 1 00:20:14.101 04:16:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:14.101 04:16:15 -- scripts/common.sh@365 -- # decimal 2 00:20:14.101 04:16:15 -- scripts/common.sh@352 -- # local d=2 00:20:14.101 04:16:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.101 04:16:15 -- scripts/common.sh@354 -- # echo 2 00:20:14.101 04:16:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:14.101 04:16:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:14.101 04:16:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:14.101 04:16:15 -- scripts/common.sh@367 -- # return 0 00:20:14.101 04:16:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.101 04:16:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:14.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.101 --rc genhtml_branch_coverage=1 00:20:14.101 --rc genhtml_function_coverage=1 00:20:14.101 --rc genhtml_legend=1 00:20:14.101 --rc geninfo_all_blocks=1 00:20:14.101 --rc geninfo_unexecuted_blocks=1 00:20:14.101 00:20:14.101 ' 00:20:14.101 04:16:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:14.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.101 --rc genhtml_branch_coverage=1 00:20:14.101 --rc genhtml_function_coverage=1 00:20:14.101 --rc genhtml_legend=1 00:20:14.101 --rc geninfo_all_blocks=1 00:20:14.101 --rc geninfo_unexecuted_blocks=1 00:20:14.101 00:20:14.101 ' 00:20:14.101 04:16:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:14.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.101 --rc genhtml_branch_coverage=1 00:20:14.101 --rc genhtml_function_coverage=1 00:20:14.101 --rc genhtml_legend=1 00:20:14.101 --rc geninfo_all_blocks=1 00:20:14.101 --rc geninfo_unexecuted_blocks=1 00:20:14.101 00:20:14.101 ' 00:20:14.101 04:16:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:14.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.101 --rc genhtml_branch_coverage=1 00:20:14.101 --rc genhtml_function_coverage=1 00:20:14.101 --rc genhtml_legend=1 00:20:14.101 --rc geninfo_all_blocks=1 00:20:14.101 --rc geninfo_unexecuted_blocks=1 00:20:14.101 00:20:14.101 ' 00:20:14.101 04:16:15 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:14.101 04:16:15 -- nvmf/common.sh@7 -- # uname -s 00:20:14.101 04:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.101 04:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.101 04:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.101 04:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.101 04:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.101 04:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.101 04:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.101 04:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.101 04:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.101 04:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.101 04:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:14.101 
04:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:14.101 04:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.101 04:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.101 04:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:14.101 04:16:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.101 04:16:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.101 04:16:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.101 04:16:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.101 04:16:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.101 04:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.101 04:16:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.101 04:16:15 -- paths/export.sh@5 -- # export PATH 00:20:14.101 04:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.101 04:16:15 -- nvmf/common.sh@46 -- # : 0 00:20:14.101 04:16:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:14.101 04:16:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:14.101 04:16:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:14.101 04:16:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.101 04:16:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.101 04:16:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:14.101 04:16:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:14.101 04:16:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:14.101 04:16:15 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:14.101 04:16:15 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:14.101 04:16:15 -- host/identify.sh@14 -- # nvmftestinit 00:20:14.101 04:16:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:14.101 04:16:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.101 04:16:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:14.101 04:16:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:14.101 04:16:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:14.101 04:16:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.101 04:16:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.101 04:16:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.101 04:16:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:14.101 04:16:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:14.101 04:16:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:14.101 04:16:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:14.101 04:16:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:14.101 04:16:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:14.101 04:16:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.101 04:16:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.101 04:16:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:14.101 04:16:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:14.101 04:16:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:14.101 04:16:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:14.101 04:16:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:14.101 04:16:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.101 04:16:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:14.101 04:16:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:14.101 04:16:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:14.101 04:16:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:14.101 04:16:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:14.101 04:16:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:14.361 Cannot find device "nvmf_tgt_br" 00:20:14.361 04:16:15 -- nvmf/common.sh@154 -- # true 00:20:14.361 04:16:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:14.361 Cannot find device "nvmf_tgt_br2" 00:20:14.361 04:16:15 -- nvmf/common.sh@155 -- # true 00:20:14.361 04:16:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:14.361 04:16:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:14.361 Cannot find device "nvmf_tgt_br" 00:20:14.361 04:16:15 -- nvmf/common.sh@157 -- # true 00:20:14.361 04:16:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:14.361 Cannot find device "nvmf_tgt_br2" 00:20:14.361 04:16:15 -- nvmf/common.sh@158 -- # true 00:20:14.361 04:16:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:14.361 04:16:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:14.361 04:16:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:14.361 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:14.361 04:16:15 -- nvmf/common.sh@161 -- # true 00:20:14.361 04:16:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:14.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.361 04:16:15 -- nvmf/common.sh@162 -- # true 00:20:14.361 04:16:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:14.361 04:16:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:14.361 04:16:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:14.361 04:16:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:14.361 04:16:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:14.361 04:16:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:14.361 04:16:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:14.361 04:16:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:14.361 04:16:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:14.361 04:16:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:14.361 04:16:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:14.361 04:16:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:14.361 04:16:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:14.620 04:16:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:14.620 04:16:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:14.620 04:16:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:14.620 04:16:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:14.620 04:16:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:14.620 04:16:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:14.620 04:16:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:14.620 04:16:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:14.620 04:16:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:14.620 04:16:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:14.620 04:16:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:14.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:14.620 00:20:14.620 --- 10.0.0.2 ping statistics --- 00:20:14.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.620 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:14.620 04:16:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:14.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:14.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:14.620 00:20:14.620 --- 10.0.0.3 ping statistics --- 00:20:14.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.620 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:14.620 04:16:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:14.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:14.620 00:20:14.620 --- 10.0.0.1 ping statistics --- 00:20:14.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.620 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:14.620 04:16:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.620 04:16:16 -- nvmf/common.sh@421 -- # return 0 00:20:14.620 04:16:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:14.620 04:16:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.620 04:16:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:14.620 04:16:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:14.620 04:16:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.620 04:16:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:14.620 04:16:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:14.620 04:16:16 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:14.620 04:16:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.620 04:16:16 -- common/autotest_common.sh@10 -- # set +x 00:20:14.620 04:16:16 -- host/identify.sh@19 -- # nvmfpid=93622 00:20:14.620 04:16:16 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.620 04:16:16 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.620 04:16:16 -- host/identify.sh@23 -- # waitforlisten 93622 00:20:14.620 04:16:16 -- common/autotest_common.sh@829 -- # '[' -z 93622 ']' 00:20:14.620 04:16:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.620 04:16:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.620 04:16:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.620 04:16:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.620 04:16:16 -- common/autotest_common.sh@10 -- # set +x 00:20:14.620 [2024-11-26 04:16:16.307734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:14.620 [2024-11-26 04:16:16.307823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.879 [2024-11-26 04:16:16.451584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.879 [2024-11-26 04:16:16.540959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:14.879 [2024-11-26 04:16:16.541159] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.879 [2024-11-26 04:16:16.541177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.879 [2024-11-26 04:16:16.541188] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
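For the identify host test the target is started the same way as before, but on four cores (-m 0xF) instead of one; the trace shows reactors coming up on cores 0-3 right after this point. A minimal sketch of the launch-and-wait step, with this run's paths, where the polling loop is only a simplified stand-in for the suite's waitforlisten helper:

# start the target inside the test namespace:
#   -i 0       shared-memory id, -e 0xFFFF  tracepoint group mask, -m 0xF  cores 0-3
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the app answers on its RPC socket before sending any configuration
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

Only after this does the test create the TCP transport, a 64 MiB Malloc0 bdev, and the nqn.2016-06.io.spdk:cnode1 subsystem that spdk_nvme_identify will query.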
00:20:14.879 [2024-11-26 04:16:16.541357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.879 [2024-11-26 04:16:16.541507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.879 [2024-11-26 04:16:16.541661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.879 [2024-11-26 04:16:16.541672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.816 04:16:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.816 04:16:17 -- common/autotest_common.sh@862 -- # return 0 00:20:15.816 04:16:17 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 [2024-11-26 04:16:17.356277] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:15.816 04:16:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 04:16:17 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 Malloc0 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 [2024-11-26 04:16:17.485406] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:15.816 04:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.816 04:16:17 -- common/autotest_common.sh@10 -- # set +x 00:20:15.816 [2024-11-26 04:16:17.505124] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:15.816 [ 
00:20:15.816 { 00:20:15.816 "allow_any_host": true, 00:20:15.816 "hosts": [], 00:20:15.816 "listen_addresses": [ 00:20:15.816 { 00:20:15.816 "adrfam": "IPv4", 00:20:15.816 "traddr": "10.0.0.2", 00:20:15.816 "transport": "TCP", 00:20:15.816 "trsvcid": "4420", 00:20:15.816 "trtype": "TCP" 00:20:15.816 } 00:20:15.816 ], 00:20:15.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:15.816 "subtype": "Discovery" 00:20:15.816 }, 00:20:15.816 { 00:20:15.816 "allow_any_host": true, 00:20:15.816 "hosts": [], 00:20:15.816 "listen_addresses": [ 00:20:15.816 { 00:20:15.816 "adrfam": "IPv4", 00:20:15.816 "traddr": "10.0.0.2", 00:20:15.816 "transport": "TCP", 00:20:15.816 "trsvcid": "4420", 00:20:15.816 "trtype": "TCP" 00:20:15.816 } 00:20:15.816 ], 00:20:15.816 "max_cntlid": 65519, 00:20:15.816 "max_namespaces": 32, 00:20:15.816 "min_cntlid": 1, 00:20:15.816 "model_number": "SPDK bdev Controller", 00:20:15.816 "namespaces": [ 00:20:15.816 { 00:20:15.816 "bdev_name": "Malloc0", 00:20:15.816 "eui64": "ABCDEF0123456789", 00:20:15.816 "name": "Malloc0", 00:20:15.816 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:15.816 "nsid": 1, 00:20:15.816 "uuid": "9ae6c627-5415-47a6-a024-6821b846832b" 00:20:15.816 } 00:20:15.816 ], 00:20:15.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.816 "serial_number": "SPDK00000000000001", 00:20:15.816 "subtype": "NVMe" 00:20:15.816 } 00:20:15.816 ] 00:20:15.816 04:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.816 04:16:17 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:15.816 [2024-11-26 04:16:17.543148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
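With the subsystems in place, identify.sh points the spdk_nvme_identify example at the discovery subsystem over TCP. The invocation captured above is reproduced below (only the repo path is specific to this workspace); -L all is what enables the nvme_tcp/nvme_ctrlr DEBUG logging that fills the rest of this trace:

# query the discovery service over NVMe/TCP with all SPDK log flags enabled
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

The DEBUG lines that follow are the TCP transport walking controller initialization against the discovery controller: the icreq/icresp exchange, FABRIC CONNECT on qid:0, then the PROPERTY GET/SET sequence that reads VS and CAP and toggles CC.EN.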
00:20:15.816 [2024-11-26 04:16:17.543214] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93675 ] 00:20:16.078 [2024-11-26 04:16:17.679515] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:16.078 [2024-11-26 04:16:17.679586] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:16.078 [2024-11-26 04:16:17.679593] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:16.078 [2024-11-26 04:16:17.679602] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:16.079 [2024-11-26 04:16:17.679613] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:16.079 [2024-11-26 04:16:17.679783] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:16.079 [2024-11-26 04:16:17.679851] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9d9510 0 00:20:16.079 [2024-11-26 04:16:17.693733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:16.079 [2024-11-26 04:16:17.693765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:16.079 [2024-11-26 04:16:17.693779] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:16.079 [2024-11-26 04:16:17.693782] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:16.079 [2024-11-26 04:16:17.693833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.693841] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.693845] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.693859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:16.079 [2024-11-26 04:16:17.693890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.701726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.701745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.079 [2024-11-26 04:16:17.701749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.701764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.701778] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:16.079 [2024-11-26 04:16:17.701785] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:16.079 [2024-11-26 04:16:17.701791] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:16.079 [2024-11-26 04:16:17.701807] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.701811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.701815] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.701823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.701849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.701935] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.701942] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.079 [2024-11-26 04:16:17.701945] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.701949] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.701954] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:16.079 [2024-11-26 04:16:17.701961] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:16.079 [2024-11-26 04:16:17.701968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.701972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.701975] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.701982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.702025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.702108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.702114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.079 [2024-11-26 04:16:17.702118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.702128] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:16.079 [2024-11-26 04:16:17.702136] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:16.079 [2024-11-26 04:16:17.702143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702147] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.702157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.702176] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.702239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.702245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:16.079 [2024-11-26 04:16:17.702249] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.702258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:16.079 [2024-11-26 04:16:17.702268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.702282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.702300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.702373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.702379] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.079 [2024-11-26 04:16:17.702383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.702391] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:16.079 [2024-11-26 04:16:17.702411] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:16.079 [2024-11-26 04:16:17.702418] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:16.079 [2024-11-26 04:16:17.702523] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:16.079 [2024-11-26 04:16:17.702539] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:16.079 [2024-11-26 04:16:17.702548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.702562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.702580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.702646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.702653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.079 [2024-11-26 04:16:17.702656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702659] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.702664] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:16.079 [2024-11-26 04:16:17.702673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.702686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.702703] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.702777] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.079 [2024-11-26 04:16:17.702785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.079 [2024-11-26 04:16:17.702788] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.079 [2024-11-26 04:16:17.702796] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:16.079 [2024-11-26 04:16:17.702801] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:16.079 [2024-11-26 04:16:17.702809] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:16.079 [2024-11-26 04:16:17.702824] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:16.079 [2024-11-26 04:16:17.702833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702837] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702840] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.079 [2024-11-26 04:16:17.702847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.079 [2024-11-26 04:16:17.702867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.079 [2024-11-26 04:16:17.702963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.079 [2024-11-26 04:16:17.702969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.079 [2024-11-26 04:16:17.702973] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.079 [2024-11-26 04:16:17.702977] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9510): datao=0, datal=4096, cccid=0 00:20:16.079 [2024-11-26 04:16:17.702981] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa258a0) on tqpair(0x9d9510): expected_datao=0, payload_size=4096 00:20:16.080 [2024-11-26 04:16:17.702989] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.702993] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.080 [2024-11-26 04:16:17.703006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.080 [2024-11-26 04:16:17.703009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.080 [2024-11-26 04:16:17.703021] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:16.080 [2024-11-26 04:16:17.703026] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:16.080 [2024-11-26 04:16:17.703030] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:16.080 [2024-11-26 04:16:17.703035] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:16.080 [2024-11-26 04:16:17.703040] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:16.080 [2024-11-26 04:16:17.703044] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:16.080 [2024-11-26 04:16:17.703057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:16.080 [2024-11-26 04:16:17.703064] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703068] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703071] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:16.080 [2024-11-26 04:16:17.703097] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.080 [2024-11-26 04:16:17.703176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.080 [2024-11-26 04:16:17.703182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.080 [2024-11-26 04:16:17.703186] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa258a0) on tqpair=0x9d9510 00:20:16.080 [2024-11-26 04:16:17.703197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703204] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.080 [2024-11-26 04:16:17.703215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703222] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.080 [2024-11-26 04:16:17.703232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.080 [2024-11-26 04:16:17.703248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.080 [2024-11-26 04:16:17.703263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:16.080 [2024-11-26 04:16:17.703275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:16.080 [2024-11-26 04:16:17.703281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.080 [2024-11-26 04:16:17.703314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa258a0, cid 0, qid 0 00:20:16.080 [2024-11-26 04:16:17.703320] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25a00, cid 1, qid 0 00:20:16.080 [2024-11-26 04:16:17.703324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25b60, cid 2, qid 0 00:20:16.080 [2024-11-26 04:16:17.703328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.080 [2024-11-26 04:16:17.703332] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25e20, cid 4, qid 0 00:20:16.080 [2024-11-26 04:16:17.703436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.080 [2024-11-26 04:16:17.703443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.080 [2024-11-26 04:16:17.703446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25e20) on tqpair=0x9d9510 00:20:16.080 
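(Reference note, not part of the captured test output.) The debug records up to this point show the host-side initialization sequence the SPDK NVMe driver runs against the discovery subsystem: connect the admin queue over TCP (ICReq/ICResp, FABRIC CONNECT), read VS and CAP, write CC.EN = 1, poll until CSTS.RDY = 1, issue IDENTIFY, configure AER, and query the keep-alive timer. Below is a minimal, hedged C sketch of equivalent host-application code; it is illustrative only (the app name is made up, error handling is trimmed), and it assumes the public SPDK host API, whose spdk_nvme_connect() drives the same state machine being logged here.

/* Illustrative sketch only -- not taken from this autotest. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same endpoint this job connects to: the NVMe/TCP discovery service. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	/* Connect adminq, enable the controller, and identify it, as logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The "Controller Capabilities/Features" report below is built from this data. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("VID: %04x CNTLID: %u\n", cdata->vid, cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}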
[2024-11-26 04:16:17.703454] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:16.080 [2024-11-26 04:16:17.703460] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:16.080 [2024-11-26 04:16:17.703469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.080 [2024-11-26 04:16:17.703500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25e20, cid 4, qid 0 00:20:16.080 [2024-11-26 04:16:17.703576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.080 [2024-11-26 04:16:17.703582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.080 [2024-11-26 04:16:17.703585] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703588] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9510): datao=0, datal=4096, cccid=4 00:20:16.080 [2024-11-26 04:16:17.703592] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa25e20) on tqpair(0x9d9510): expected_datao=0, payload_size=4096 00:20:16.080 [2024-11-26 04:16:17.703599] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703603] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.080 [2024-11-26 04:16:17.703615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.080 [2024-11-26 04:16:17.703618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25e20) on tqpair=0x9d9510 00:20:16.080 [2024-11-26 04:16:17.703634] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:16.080 [2024-11-26 04:16:17.703660] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703666] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703669] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.080 [2024-11-26 04:16:17.703682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703686] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d9510) 00:20:16.080 [2024-11-26 04:16:17.703694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:16.080 [2024-11-26 04:16:17.703745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25e20, cid 4, qid 0 00:20:16.080 [2024-11-26 04:16:17.703754] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25f80, cid 5, qid 0 00:20:16.080 [2024-11-26 04:16:17.703865] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.080 [2024-11-26 04:16:17.703871] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.080 [2024-11-26 04:16:17.703875] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703878] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9510): datao=0, datal=1024, cccid=4 00:20:16.080 [2024-11-26 04:16:17.703882] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa25e20) on tqpair(0x9d9510): expected_datao=0, payload_size=1024 00:20:16.080 [2024-11-26 04:16:17.703888] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703892] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.080 [2024-11-26 04:16:17.703901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.080 [2024-11-26 04:16:17.703904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.703908] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25f80) on tqpair=0x9d9510 00:20:16.080 [2024-11-26 04:16:17.749734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.080 [2024-11-26 04:16:17.749752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.080 [2024-11-26 04:16:17.749756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.749760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25e20) on tqpair=0x9d9510 00:20:16.080 [2024-11-26 04:16:17.749773] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.749778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.080 [2024-11-26 04:16:17.749781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9510) 00:20:16.081 [2024-11-26 04:16:17.749789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.081 [2024-11-26 04:16:17.749819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25e20, cid 4, qid 0 00:20:16.081 [2024-11-26 04:16:17.749920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.081 [2024-11-26 04:16:17.749926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.081 [2024-11-26 04:16:17.749929] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.749933] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9510): datao=0, datal=3072, cccid=4 00:20:16.081 [2024-11-26 04:16:17.749937] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa25e20) on tqpair(0x9d9510): expected_datao=0, payload_size=3072 00:20:16.081 [2024-11-26 04:16:17.749944] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 
04:16:17.749947] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.749955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.081 [2024-11-26 04:16:17.749960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.081 [2024-11-26 04:16:17.749963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.749967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25e20) on tqpair=0x9d9510 00:20:16.081 [2024-11-26 04:16:17.749976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.749980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.749983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d9510) 00:20:16.081 [2024-11-26 04:16:17.749989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.081 [2024-11-26 04:16:17.750082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25e20, cid 4, qid 0 00:20:16.081 [2024-11-26 04:16:17.750173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.081 [2024-11-26 04:16:17.750180] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.081 [2024-11-26 04:16:17.750184] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.750187] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d9510): datao=0, datal=8, cccid=4 00:20:16.081 [2024-11-26 04:16:17.750192] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa25e20) on tqpair(0x9d9510): expected_datao=0, payload_size=8 00:20:16.081 [2024-11-26 04:16:17.750199] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.750203] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.081 ===================================================== 00:20:16.081 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:16.081 ===================================================== 00:20:16.081 Controller Capabilities/Features 00:20:16.081 ================================ 00:20:16.081 Vendor ID: 0000 00:20:16.081 Subsystem Vendor ID: 0000 00:20:16.081 Serial Number: .................... 00:20:16.081 Model Number: ........................................ 
00:20:16.081 Firmware Version: 24.01.1 00:20:16.081 Recommended Arb Burst: 0 00:20:16.081 IEEE OUI Identifier: 00 00 00 00:20:16.081 Multi-path I/O 00:20:16.081 May have multiple subsystem ports: No 00:20:16.081 May have multiple controllers: No 00:20:16.081 Associated with SR-IOV VF: No 00:20:16.081 Max Data Transfer Size: 131072 00:20:16.081 Max Number of Namespaces: 0 00:20:16.081 Max Number of I/O Queues: 1024 00:20:16.081 NVMe Specification Version (VS): 1.3 00:20:16.081 NVMe Specification Version (Identify): 1.3 00:20:16.081 Maximum Queue Entries: 128 00:20:16.081 Contiguous Queues Required: Yes 00:20:16.081 Arbitration Mechanisms Supported 00:20:16.081 Weighted Round Robin: Not Supported 00:20:16.081 Vendor Specific: Not Supported 00:20:16.081 Reset Timeout: 15000 ms 00:20:16.081 Doorbell Stride: 4 bytes 00:20:16.081 NVM Subsystem Reset: Not Supported 00:20:16.081 Command Sets Supported 00:20:16.081 NVM Command Set: Supported 00:20:16.081 Boot Partition: Not Supported 00:20:16.081 Memory Page Size Minimum: 4096 bytes 00:20:16.081 Memory Page Size Maximum: 4096 bytes 00:20:16.081 Persistent Memory Region: Not Supported 00:20:16.081 Optional Asynchronous Events Supported 00:20:16.081 Namespace Attribute Notices: Not Supported 00:20:16.081 Firmware Activation Notices: Not Supported 00:20:16.081 ANA Change Notices: Not Supported 00:20:16.081 PLE Aggregate Log Change Notices: Not Supported 00:20:16.081 LBA Status Info Alert Notices: Not Supported 00:20:16.081 EGE Aggregate Log Change Notices: Not Supported 00:20:16.081 Normal NVM Subsystem Shutdown event: Not Supported 00:20:16.081 Zone Descriptor Change Notices: Not Supported 00:20:16.081 Discovery Log Change Notices: Supported 00:20:16.081 Controller Attributes 00:20:16.081 128-bit Host Identifier: Not Supported 00:20:16.081 Non-Operational Permissive Mode: Not Supported 00:20:16.081 NVM Sets: Not Supported 00:20:16.081 Read Recovery Levels: Not Supported 00:20:16.081 Endurance Groups: Not Supported 00:20:16.081 Predictable Latency Mode: Not Supported 00:20:16.081 Traffic Based Keep ALive: Not Supported 00:20:16.081 Namespace Granularity: Not Supported 00:20:16.081 SQ Associations: Not Supported 00:20:16.081 UUID List: Not Supported 00:20:16.081 Multi-Domain Subsystem: Not Supported 00:20:16.081 Fixed Capacity Management: Not Supported 00:20:16.081 Variable Capacity Management: Not Supported 00:20:16.081 Delete Endurance Group: Not Supported 00:20:16.081 Delete NVM Set: Not Supported 00:20:16.081 Extended LBA Formats Supported: Not Supported 00:20:16.081 Flexible Data Placement Supported: Not Supported 00:20:16.081 00:20:16.081 Controller Memory Buffer Support 00:20:16.081 ================================ 00:20:16.081 Supported: No 00:20:16.081 00:20:16.081 Persistent Memory Region Support 00:20:16.081 ================================ 00:20:16.081 Supported: No 00:20:16.081 00:20:16.081 Admin Command Set Attributes 00:20:16.081 ============================ 00:20:16.081 Security Send/Receive: Not Supported 00:20:16.081 Format NVM: Not Supported 00:20:16.081 Firmware Activate/Download: Not Supported 00:20:16.081 Namespace Management: Not Supported 00:20:16.081 Device Self-Test: Not Supported 00:20:16.081 Directives: Not Supported 00:20:16.081 NVMe-MI: Not Supported 00:20:16.081 Virtualization Management: Not Supported 00:20:16.081 Doorbell Buffer Config: Not Supported 00:20:16.081 Get LBA Status Capability: Not Supported 00:20:16.081 Command & Feature Lockdown Capability: Not Supported 00:20:16.081 Abort Command Limit: 1 00:20:16.081 
Async Event Request Limit: 4 00:20:16.081 Number of Firmware Slots: N/A 00:20:16.081 Firmware Slot 1 Read-Only: N/A 00:20:16.081 [2024-11-26 04:16:17.794785] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.081 [2024-11-26 04:16:17.794804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.081 [2024-11-26 04:16:17.794808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.081 [2024-11-26 04:16:17.794812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25e20) on tqpair=0x9d9510 00:20:16.081 Firmware Activation Without Reset: N/A 00:20:16.081 Multiple Update Detection Support: N/A 00:20:16.081 Firmware Update Granularity: No Information Provided 00:20:16.081 Per-Namespace SMART Log: No 00:20:16.081 Asymmetric Namespace Access Log Page: Not Supported 00:20:16.081 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:16.081 Command Effects Log Page: Not Supported 00:20:16.081 Get Log Page Extended Data: Supported 00:20:16.081 Telemetry Log Pages: Not Supported 00:20:16.081 Persistent Event Log Pages: Not Supported 00:20:16.081 Supported Log Pages Log Page: May Support 00:20:16.081 Commands Supported & Effects Log Page: Not Supported 00:20:16.081 Feature Identifiers & Effects Log Page:May Support 00:20:16.081 NVMe-MI Commands & Effects Log Page: May Support 00:20:16.081 Data Area 4 for Telemetry Log: Not Supported 00:20:16.081 Error Log Page Entries Supported: 128 00:20:16.081 Keep Alive: Not Supported 00:20:16.081 00:20:16.081 NVM Command Set Attributes 00:20:16.081 ========================== 00:20:16.081 Submission Queue Entry Size 00:20:16.081 Max: 1 00:20:16.081 Min: 1 00:20:16.081 Completion Queue Entry Size 00:20:16.081 Max: 1 00:20:16.081 Min: 1 00:20:16.081 Number of Namespaces: 0 00:20:16.081 Compare Command: Not Supported 00:20:16.081 Write Uncorrectable Command: Not Supported 00:20:16.081 Dataset Management Command: Not Supported 00:20:16.081 Write Zeroes Command: Not Supported 00:20:16.081 Set Features Save Field: Not Supported 00:20:16.081 Reservations: Not Supported 00:20:16.081 Timestamp: Not Supported 00:20:16.081 Copy: Not Supported 00:20:16.081 Volatile Write Cache: Not Present 00:20:16.081 Atomic Write Unit (Normal): 1 00:20:16.081 Atomic Write Unit (PFail): 1 00:20:16.081 Atomic Compare & Write Unit: 1 00:20:16.081 Fused Compare & Write: Supported 00:20:16.081 Scatter-Gather List 00:20:16.081 SGL Command Set: Supported 00:20:16.081 SGL Keyed: Supported 00:20:16.081 SGL Bit Bucket Descriptor: Not Supported 00:20:16.081 SGL Metadata Pointer: Not Supported 00:20:16.081 Oversized SGL: Not Supported 00:20:16.081 SGL Metadata Address: Not Supported 00:20:16.081 SGL Offset: Supported 00:20:16.081 Transport SGL Data Block: Not Supported 00:20:16.081 Replay Protected Memory Block: Not Supported 00:20:16.081 00:20:16.081 Firmware Slot Information 00:20:16.081 ========================= 00:20:16.082 Active slot: 0 00:20:16.082 00:20:16.082 00:20:16.082 Error Log 00:20:16.082 ========= 00:20:16.082 00:20:16.082 Active Namespaces 00:20:16.082 ================= 00:20:16.082 Discovery Log Page 00:20:16.082 ================== 00:20:16.082 Generation Counter: 2 00:20:16.082 Number of Records: 2 00:20:16.082 Record Format: 0 00:20:16.082 00:20:16.082 Discovery Log Entry 0 00:20:16.082 ---------------------- 00:20:16.082 Transport Type: 3 (TCP) 00:20:16.082 Address Family: 1 (IPv4) 00:20:16.082 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:16.082 Entry Flags: 00:20:16.082 Duplicate
Returned Information: 1 00:20:16.082 Explicit Persistent Connection Support for Discovery: 1 00:20:16.082 Transport Requirements: 00:20:16.082 Secure Channel: Not Required 00:20:16.082 Port ID: 0 (0x0000) 00:20:16.082 Controller ID: 65535 (0xffff) 00:20:16.082 Admin Max SQ Size: 128 00:20:16.082 Transport Service Identifier: 4420 00:20:16.082 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:16.082 Transport Address: 10.0.0.2 00:20:16.082 Discovery Log Entry 1 00:20:16.082 ---------------------- 00:20:16.082 Transport Type: 3 (TCP) 00:20:16.082 Address Family: 1 (IPv4) 00:20:16.082 Subsystem Type: 2 (NVM Subsystem) 00:20:16.082 Entry Flags: 00:20:16.082 Duplicate Returned Information: 0 00:20:16.082 Explicit Persistent Connection Support for Discovery: 0 00:20:16.082 Transport Requirements: 00:20:16.082 Secure Channel: Not Required 00:20:16.082 Port ID: 0 (0x0000) 00:20:16.082 Controller ID: 65535 (0xffff) 00:20:16.082 Admin Max SQ Size: 128 00:20:16.082 Transport Service Identifier: 4420 00:20:16.082 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:16.082 Transport Address: 10.0.0.2 [2024-11-26 04:16:17.794931] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:16.082 [2024-11-26 04:16:17.794950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.082 [2024-11-26 04:16:17.794958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.082 [2024-11-26 04:16:17.794963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.082 [2024-11-26 04:16:17.794968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.082 [2024-11-26 04:16:17.794977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.794981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.794985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.794992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795096] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795103] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.795144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795265] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:16.082 [2024-11-26 04:16:17.795269] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:16.082 [2024-11-26 04:16:17.795278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.795292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.795410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795522] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 
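(Reference note, not part of the captured test output.) The three GET LOG PAGE (02) commands earlier in this trace (cdw10:00ff0070, cdw10:02ff0070, cdw10:00010070) all target log page 0x70, the discovery log: bits [7:0] of CDW10 carry the log page identifier and bits [31:16] carry NUMDL, the zero-based dword count, which is why the corresponding C2H transfers above are 1024, 3072 and 8 bytes (datal=1024/3072/8). The repeated FABRIC PROPERTY GET records that follow here are the host polling the controller status register during the 10000 ms shutdown window noted above. A small, self-contained C decode of those CDW10 values follows; it is illustrative only and the helper name is not from the test.

#include <stdint.h>
#include <stdio.h>

/* Decode Get Log Page CDW10 as used by the discovery-log reads in this trace. */
static void
decode_get_log_page_cdw10(uint32_t cdw10)
{
	uint32_t lid   = cdw10 & 0xff;           /* log page identifier (0x70 = discovery) */
	uint32_t numdl = (cdw10 >> 16) & 0xffff; /* number of dwords, zero-based */

	printf("cdw10=0x%08x -> lid=0x%02x, transfer=%u bytes\n",
	       cdw10, lid, (numdl + 1) * 4);
}

int
main(void)
{
	decode_get_log_page_cdw10(0x00ff0070); /* header probe            -> 1024 bytes */
	decode_get_log_page_cdw10(0x02ff0070); /* full discovery log page -> 3072 bytes */
	decode_get_log_page_cdw10(0x00010070); /* 8-byte header (genctr) re-read        */
	return 0;
}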
00:20:16.082 [2024-11-26 04:16:17.795529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795616] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795636] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.795645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.795768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 04:16:17.795787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795864] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795867] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.082 [2024-11-26 04:16:17.795890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.082 [2024-11-26 
04:16:17.795907] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.082 [2024-11-26 04:16:17.795972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.082 [2024-11-26 04:16:17.795978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.082 [2024-11-26 04:16:17.795981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.082 [2024-11-26 04:16:17.795984] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.082 [2024-11-26 04:16:17.795993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.795997] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796101] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796140] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796236] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796252] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:20:16.083 [2024-11-26 04:16:17.796343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796346] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796471] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796585] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796627] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796692] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796741] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796857] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796860] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.796883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.796950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.796956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.796960] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.796972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.796979] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.796985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.797001] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.797065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.797071] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.797074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.797077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.083 [2024-11-26 04:16:17.797087] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.797091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.797094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.083 [2024-11-26 04:16:17.797100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.083 [2024-11-26 04:16:17.797117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.083 [2024-11-26 04:16:17.797183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.083 [2024-11-26 04:16:17.797189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.083 [2024-11-26 04:16:17.797192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.083 [2024-11-26 04:16:17.797195] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797301] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797410] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797413] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797422] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797426] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 
04:16:17.797429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797529] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797548] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797661] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797775] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.797893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.797900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.797906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.797922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.797981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.797987] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.797990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798018] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.798029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.798043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.798061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.798127] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.798139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.798143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.798157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.798171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.798189] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, 
qid 0 00:20:16.084 [2024-11-26 04:16:17.798249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.798256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.798259] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.798272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.798286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.798303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.798381] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.798388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.798391] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.798418] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798426] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.798432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.798449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.798514] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.798519] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.798523] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798526] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.084 [2024-11-26 04:16:17.798535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.084 [2024-11-26 04:16:17.798548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.084 [2024-11-26 04:16:17.798565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.084 [2024-11-26 04:16:17.798634] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.084 [2024-11-26 04:16:17.798640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:16.084 [2024-11-26 04:16:17.798643] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.084 [2024-11-26 04:16:17.798646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.085 [2024-11-26 04:16:17.798655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.085 [2024-11-26 04:16:17.798659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.085 [2024-11-26 04:16:17.798663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.085 [2024-11-26 04:16:17.798669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.085 [2024-11-26 04:16:17.798685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.085 [2024-11-26 04:16:17.802756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.085 [2024-11-26 04:16:17.802773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.085 [2024-11-26 04:16:17.802777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.085 [2024-11-26 04:16:17.802781] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.085 [2024-11-26 04:16:17.802793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.085 [2024-11-26 04:16:17.802798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.085 [2024-11-26 04:16:17.802802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d9510) 00:20:16.085 [2024-11-26 04:16:17.802809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.085 [2024-11-26 04:16:17.802832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa25cc0, cid 3, qid 0 00:20:16.085 [2024-11-26 04:16:17.802902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.085 [2024-11-26 04:16:17.802909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.085 [2024-11-26 04:16:17.802912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.085 [2024-11-26 04:16:17.802915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa25cc0) on tqpair=0x9d9510 00:20:16.085 [2024-11-26 04:16:17.802922] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:16.085 00:20:16.085 04:16:17 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:16.347 [2024-11-26 04:16:17.838571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
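[editor's note] The trace above ends with the discovery controller being shut down, and host/identify.sh then launches spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1 over TCP at 10.0.0.2:4420. The debug records that follow show the admin-queue bring-up for that controller (FABRIC CONNECT, the CC/CSTS/CAP property accesses, IDENTIFY, SET FEATURES) before the controller report is printed. Purely as an illustrative sketch — not part of the captured run, and with the program name and printed fields assumed — a minimal client could drive the same connect-and-identify step with SPDK's public C API; only the transport string and subsystem NQN below are taken from the log.

```c
/*
 * Sketch only: connect to the NVMe-oF/TCP subsystem from the log and print a
 * few fields of the controller identify data. Assumes an SPDK build with the
 * public headers installed; error handling is minimal on purpose.
 */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string that identify.sh passes via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * This drives the admin-queue sequence visible in the debug trace:
	 * FABRIC CONNECT, PROPERTY GET/SET for CAP/CC/CSTS, then IDENTIFY.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* A subset of what spdk_nvme_identify prints in the report below. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number:    %-20.20s\n", cdata->sn);
	printf("Model Number:     %-40.40s\n", cdata->mn);
	printf("Firmware Version: %-8.8s\n", cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```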
00:20:16.347 [2024-11-26 04:16:17.838635] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93678 ] 00:20:16.347 [2024-11-26 04:16:17.975315] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:16.347 [2024-11-26 04:16:17.975375] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:16.347 [2024-11-26 04:16:17.975381] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:16.347 [2024-11-26 04:16:17.975390] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:16.347 [2024-11-26 04:16:17.975399] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:16.347 [2024-11-26 04:16:17.975496] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:16.347 [2024-11-26 04:16:17.975540] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1513510 0 00:20:16.347 [2024-11-26 04:16:17.982758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:16.347 [2024-11-26 04:16:17.982776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:16.347 [2024-11-26 04:16:17.982793] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:16.347 [2024-11-26 04:16:17.982797] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:16.347 [2024-11-26 04:16:17.982838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.982844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.982847] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.347 [2024-11-26 04:16:17.982858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:16.347 [2024-11-26 04:16:17.982886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.347 [2024-11-26 04:16:17.990760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.347 [2024-11-26 04:16:17.990777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.347 [2024-11-26 04:16:17.990781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.990785] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.347 [2024-11-26 04:16:17.990804] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:16.347 [2024-11-26 04:16:17.990810] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:16.347 [2024-11-26 04:16:17.990815] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:16.347 [2024-11-26 04:16:17.990828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.990832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.990836] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.347 [2024-11-26 04:16:17.990843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.347 [2024-11-26 04:16:17.990869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.347 [2024-11-26 04:16:17.990939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.347 [2024-11-26 04:16:17.990945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.347 [2024-11-26 04:16:17.990948] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.990951] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.347 [2024-11-26 04:16:17.990957] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:16.347 [2024-11-26 04:16:17.990963] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:16.347 [2024-11-26 04:16:17.990970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.990974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.990977] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.347 [2024-11-26 04:16:17.990984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.347 [2024-11-26 04:16:17.991000] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.347 [2024-11-26 04:16:17.991074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.347 [2024-11-26 04:16:17.991080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.347 [2024-11-26 04:16:17.991083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.347 [2024-11-26 04:16:17.991092] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:16.347 [2024-11-26 04:16:17.991112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:16.347 [2024-11-26 04:16:17.991118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.347 [2024-11-26 04:16:17.991131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.347 [2024-11-26 04:16:17.991147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.347 [2024-11-26 04:16:17.991211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.347 [2024-11-26 04:16:17.991216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.347 [2024-11-26 
04:16:17.991219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.347 [2024-11-26 04:16:17.991228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:16.347 [2024-11-26 04:16:17.991237] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.347 [2024-11-26 04:16:17.991250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.347 [2024-11-26 04:16:17.991274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.347 [2024-11-26 04:16:17.991335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.347 [2024-11-26 04:16:17.991341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.347 [2024-11-26 04:16:17.991344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991347] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.347 [2024-11-26 04:16:17.991352] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:16.347 [2024-11-26 04:16:17.991356] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:16.347 [2024-11-26 04:16:17.991363] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:16.347 [2024-11-26 04:16:17.991468] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:16.347 [2024-11-26 04:16:17.991472] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:16.347 [2024-11-26 04:16:17.991479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.347 [2024-11-26 04:16:17.991492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.347 [2024-11-26 04:16:17.991508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.347 [2024-11-26 04:16:17.991567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.347 [2024-11-26 04:16:17.991573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.347 [2024-11-26 04:16:17.991576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991579] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.347 
[2024-11-26 04:16:17.991585] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:16.347 [2024-11-26 04:16:17.991593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.347 [2024-11-26 04:16:17.991600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.991606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.348 [2024-11-26 04:16:17.991622] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.348 [2024-11-26 04:16:17.991682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.348 [2024-11-26 04:16:17.991688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.348 [2024-11-26 04:16:17.991691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991694] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.348 [2024-11-26 04:16:17.991699] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:16.348 [2024-11-26 04:16:17.991703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.991721] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:16.348 [2024-11-26 04:16:17.991736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.991744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.991759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.348 [2024-11-26 04:16:17.991778] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.348 [2024-11-26 04:16:17.991878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.348 [2024-11-26 04:16:17.991884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.348 [2024-11-26 04:16:17.991888] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991891] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=4096, cccid=0 00:20:16.348 [2024-11-26 04:16:17.991895] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x155f8a0) on tqpair(0x1513510): expected_datao=0, payload_size=4096 00:20:16.348 [2024-11-26 04:16:17.991902] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991905] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.348 [2024-11-26 04:16:17.991917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.348 [2024-11-26 04:16:17.991920] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991923] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.348 [2024-11-26 04:16:17.991931] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:16.348 [2024-11-26 04:16:17.991936] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:16.348 [2024-11-26 04:16:17.991940] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:16.348 [2024-11-26 04:16:17.991944] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:16.348 [2024-11-26 04:16:17.991947] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:16.348 [2024-11-26 04:16:17.991952] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.991964] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.991971] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.991977] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.991984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:16.348 [2024-11-26 04:16:17.992004] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.348 [2024-11-26 04:16:17.992066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.348 [2024-11-26 04:16:17.992072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.348 [2024-11-26 04:16:17.992075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992079] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155f8a0) on tqpair=0x1513510 00:20:16.348 [2024-11-26 04:16:17.992086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992089] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.348 [2024-11-26 04:16:17.992103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.348 [2024-11-26 04:16:17.992120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.348 [2024-11-26 04:16:17.992136] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992139] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.348 [2024-11-26 04:16:17.992150] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992161] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992168] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992175] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.348 [2024-11-26 04:16:17.992199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155f8a0, cid 0, qid 0 00:20:16.348 [2024-11-26 04:16:17.992205] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fa00, cid 1, qid 0 00:20:16.348 [2024-11-26 04:16:17.992209] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fb60, cid 2, qid 0 00:20:16.348 [2024-11-26 04:16:17.992213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fcc0, cid 3, qid 0 00:20:16.348 [2024-11-26 04:16:17.992217] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.348 [2024-11-26 04:16:17.992310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.348 [2024-11-26 04:16:17.992316] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.348 [2024-11-26 04:16:17.992319] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.348 [2024-11-26 04:16:17.992327] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:16.348 [2024-11-26 04:16:17.992331] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992339] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992349] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:16.348 [2024-11-26 04:16:17.992385] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.348 [2024-11-26 04:16:17.992449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.348 [2024-11-26 04:16:17.992455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.348 [2024-11-26 04:16:17.992458] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.348 [2024-11-26 04:16:17.992511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:16.348 [2024-11-26 04:16:17.992527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.348 [2024-11-26 04:16:17.992540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.348 [2024-11-26 04:16:17.992556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.348 [2024-11-26 04:16:17.992623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.348 [2024-11-26 04:16:17.992629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.348 [2024-11-26 04:16:17.992632] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.348 [2024-11-26 04:16:17.992635] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=4096, cccid=4 00:20:16.348 [2024-11-26 04:16:17.992639] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x155fe20) on tqpair(0x1513510): expected_datao=0, payload_size=4096 00:20:16.348 [2024-11-26 04:16:17.992646] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992649] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:20:16.349 [2024-11-26 04:16:17.992656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.992662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.992665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.992683] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:16.349 [2024-11-26 04:16:17.992694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.992703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.992721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.992736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.992755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.349 [2024-11-26 04:16:17.992829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.349 [2024-11-26 04:16:17.992835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.349 [2024-11-26 04:16:17.992839] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992842] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=4096, cccid=4 00:20:16.349 [2024-11-26 04:16:17.992846] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x155fe20) on tqpair(0x1513510): expected_datao=0, payload_size=4096 00:20:16.349 [2024-11-26 04:16:17.992852] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992856] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.992868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.992870] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992874] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.992889] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.992899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.992906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992910] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.992913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.992919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.992937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.349 [2024-11-26 04:16:17.993003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.349 [2024-11-26 04:16:17.993008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.349 [2024-11-26 04:16:17.993011] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993014] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=4096, cccid=4 00:20:16.349 [2024-11-26 04:16:17.993018] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x155fe20) on tqpair(0x1513510): expected_datao=0, payload_size=4096 00:20:16.349 [2024-11-26 04:16:17.993025] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993028] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993035] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.993040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.993043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.993056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.993064] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.993073] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.993079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.993092] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.993097] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:16.349 [2024-11-26 04:16:17.993101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:16.349 [2024-11-26 04:16:17.993106] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:16.349 [2024-11-26 04:16:17.993119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993126] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.993133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.993139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993145] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.993150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.349 [2024-11-26 04:16:17.993173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.349 [2024-11-26 04:16:17.993179] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155ff80, cid 5, qid 0 00:20:16.349 [2024-11-26 04:16:17.993252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.993258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.993261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993265] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.993272] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.993276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.993279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155ff80) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.993292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.993305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.993321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155ff80, cid 5, qid 0 00:20:16.349 [2024-11-26 04:16:17.993374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.993380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.993383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155ff80) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.993395] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993402] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.993409] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.993424] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155ff80, cid 5, qid 0 00:20:16.349 [2024-11-26 04:16:17.993477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.993483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.993486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155ff80) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.993499] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.993511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.993526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155ff80, cid 5, qid 0 00:20:16.349 [2024-11-26 04:16:17.993583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.349 [2024-11-26 04:16:17.993588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.349 [2024-11-26 04:16:17.993591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155ff80) on tqpair=0x1513510 00:20:16.349 [2024-11-26 04:16:17.993606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.349 [2024-11-26 04:16:17.993614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1513510) 00:20:16.349 [2024-11-26 04:16:17.993620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.349 [2024-11-26 04:16:17.993627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1513510) 00:20:16.350 [2024-11-26 04:16:17.993638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.350 [2024-11-26 04:16:17.993644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993647] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1513510) 00:20:16.350 [2024-11-26 04:16:17.993656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:16.350 [2024-11-26 04:16:17.993662] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993668] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1513510) 00:20:16.350 [2024-11-26 04:16:17.993673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.350 [2024-11-26 04:16:17.993691] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155ff80, cid 5, qid 0 00:20:16.350 [2024-11-26 04:16:17.993697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fe20, cid 4, qid 0 00:20:16.350 [2024-11-26 04:16:17.993701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15600e0, cid 6, qid 0 00:20:16.350 [2024-11-26 04:16:17.993705] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1560240, cid 7, qid 0 00:20:16.350 [2024-11-26 04:16:17.993865] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.350 [2024-11-26 04:16:17.993872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.350 [2024-11-26 04:16:17.993876] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993879] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=8192, cccid=5 00:20:16.350 [2024-11-26 04:16:17.993883] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x155ff80) on tqpair(0x1513510): expected_datao=0, payload_size=8192 00:20:16.350 [2024-11-26 04:16:17.993897] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993901] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993905] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.350 [2024-11-26 04:16:17.993910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.350 [2024-11-26 04:16:17.993913] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993916] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=512, cccid=4 00:20:16.350 [2024-11-26 04:16:17.993920] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x155fe20) on tqpair(0x1513510): expected_datao=0, payload_size=512 00:20:16.350 [2024-11-26 04:16:17.993926] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993929] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.350 [2024-11-26 04:16:17.993938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.350 [2024-11-26 04:16:17.993941] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993945] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=512, cccid=6 00:20:16.350 [2024-11-26 04:16:17.993949] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15600e0) on tqpair(0x1513510): expected_datao=0, payload_size=512 00:20:16.350 [2024-11-26 04:16:17.993955] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993957] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:16.350 [2024-11-26 04:16:17.993967] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:16.350 [2024-11-26 04:16:17.993969] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:16.350 ===================================================== 00:20:16.350 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.350 ===================================================== 00:20:16.350 Controller Capabilities/Features 00:20:16.350 ================================ 00:20:16.350 Vendor ID: 8086 00:20:16.350 Subsystem Vendor ID: 8086 00:20:16.350 Serial Number: SPDK00000000000001 00:20:16.350 Model Number: SPDK bdev Controller 00:20:16.350 Firmware Version: 24.01.1 00:20:16.350 Recommended Arb Burst: 6 00:20:16.350 IEEE OUI Identifier: e4 d2 5c 00:20:16.350 Multi-path I/O 00:20:16.350 May have multiple subsystem ports: Yes 00:20:16.350 May have multiple controllers: Yes 00:20:16.350 Associated with SR-IOV VF: No 00:20:16.350 Max Data Transfer Size: 131072 00:20:16.350 Max Number of Namespaces: 32 00:20:16.350 Max Number of I/O Queues: 127 00:20:16.350 NVMe Specification Version (VS): 1.3 00:20:16.350 NVMe Specification Version (Identify): 1.3 00:20:16.350 Maximum Queue Entries: 128 00:20:16.350 Contiguous Queues Required: Yes 00:20:16.350 Arbitration Mechanisms Supported 00:20:16.350 Weighted Round Robin: Not Supported 00:20:16.350 Vendor Specific: Not Supported 00:20:16.350 Reset Timeout: 15000 ms 00:20:16.350 Doorbell Stride: 4 bytes 00:20:16.350 NVM Subsystem Reset: Not Supported 00:20:16.350 Command Sets Supported 00:20:16.350 NVM Command Set: Supported 00:20:16.350 Boot Partition: Not Supported 00:20:16.350 Memory Page Size Minimum: 4096 bytes 00:20:16.350 Memory Page Size Maximum: 4096 bytes 00:20:16.350 Persistent Memory Region: Not Supported 00:20:16.350 Optional Asynchronous Events Supported 00:20:16.350 Namespace Attribute Notices: Supported 00:20:16.350 Firmware Activation Notices: Not Supported 00:20:16.350 ANA Change Notices: Not Supported 00:20:16.350 PLE Aggregate Log Change Notices: Not Supported 00:20:16.350 LBA Status Info Alert Notices: Not Supported 00:20:16.350 EGE Aggregate Log Change Notices: Not Supported 00:20:16.350 Normal NVM Subsystem Shutdown event: Not Supported 00:20:16.350 Zone Descriptor Change Notices: Not Supported 00:20:16.350 Discovery Log Change Notices: Not Supported 00:20:16.350 Controller Attributes 00:20:16.350 128-bit Host Identifier: Supported 00:20:16.350 Non-Operational Permissive Mode: Not Supported 00:20:16.350 NVM Sets: Not Supported 00:20:16.350 Read Recovery Levels: Not Supported 00:20:16.350 Endurance Groups: Not Supported 00:20:16.350 Predictable Latency Mode: Not Supported 00:20:16.350 Traffic Based Keep ALive: Not Supported 00:20:16.350 Namespace Granularity: Not Supported 00:20:16.350 SQ Associations: Not Supported 00:20:16.350 UUID List: Not Supported 00:20:16.350 Multi-Domain Subsystem: Not Supported 00:20:16.350 Fixed Capacity Management: Not Supported 00:20:16.350 Variable Capacity Management: Not Supported 00:20:16.350 Delete Endurance Group: Not Supported 00:20:16.350 Delete NVM Set: Not Supported 00:20:16.350 Extended LBA Formats Supported: Not Supported 00:20:16.350 Flexible Data Placement Supported: Not 
Supported 00:20:16.350 00:20:16.350 Controller Memory Buffer Support 00:20:16.350 ================================ 00:20:16.350 Supported: No 00:20:16.350 00:20:16.350 Persistent Memory Region Support 00:20:16.350 ================================ 00:20:16.350 Supported: No 00:20:16.350 00:20:16.350 Admin Command Set Attributes 00:20:16.350 ============================ 00:20:16.350 Security Send/Receive: Not Supported 00:20:16.350 Format NVM: Not Supported 00:20:16.350 Firmware Activate/Download: Not Supported 00:20:16.350 Namespace Management: Not Supported 00:20:16.350 Device Self-Test: Not Supported 00:20:16.350 Directives: Not Supported 00:20:16.350 NVMe-MI: Not Supported 00:20:16.350 Virtualization Management: Not Supported 00:20:16.350 Doorbell Buffer Config: Not Supported 00:20:16.350 Get LBA Status Capability: Not Supported 00:20:16.350 Command & Feature Lockdown Capability: Not Supported 00:20:16.350 Abort Command Limit: 4 00:20:16.350 Async Event Request Limit: 4 00:20:16.350 Number of Firmware Slots: N/A 00:20:16.350 Firmware Slot 1 Read-Only: N/A 00:20:16.350 Firmware Activation Without Reset: [2024-11-26 04:16:17.993972] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1513510): datao=0, datal=4096, cccid=7 00:20:16.350 [2024-11-26 04:16:17.993976] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1560240) on tqpair(0x1513510): expected_datao=0, payload_size=4096 00:20:16.350 [2024-11-26 04:16:17.993982] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.993985] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.994000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.350 [2024-11-26 04:16:17.994022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.350 [2024-11-26 04:16:17.994026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.994029] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155ff80) on tqpair=0x1513510 00:20:16.350 [2024-11-26 04:16:17.994046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.350 [2024-11-26 04:16:17.994052] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.350 [2024-11-26 04:16:17.994055] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.994058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fe20) on tqpair=0x1513510 00:20:16.350 [2024-11-26 04:16:17.994069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.350 [2024-11-26 04:16:17.994074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.350 [2024-11-26 04:16:17.994078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.350 [2024-11-26 04:16:17.994081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15600e0) on tqpair=0x1513510 00:20:16.350 [2024-11-26 04:16:17.994088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.351 [2024-11-26 04:16:17.994094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.351 [2024-11-26 04:16:17.994097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1560240) on tqpair=0x1513510 00:20:16.351 N/A 00:20:16.351 Multiple 
Update Detection Support: N/A 00:20:16.351 Firmware Update Granularity: No Information Provided 00:20:16.351 Per-Namespace SMART Log: No 00:20:16.351 Asymmetric Namespace Access Log Page: Not Supported 00:20:16.351 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:16.351 Command Effects Log Page: Supported 00:20:16.351 Get Log Page Extended Data: Supported 00:20:16.351 Telemetry Log Pages: Not Supported 00:20:16.351 Persistent Event Log Pages: Not Supported 00:20:16.351 Supported Log Pages Log Page: May Support 00:20:16.351 Commands Supported & Effects Log Page: Not Supported 00:20:16.351 Feature Identifiers & Effects Log Page:May Support 00:20:16.351 NVMe-MI Commands & Effects Log Page: May Support 00:20:16.351 Data Area 4 for Telemetry Log: Not Supported 00:20:16.351 Error Log Page Entries Supported: 128 00:20:16.351 Keep Alive: Supported 00:20:16.351 Keep Alive Granularity: 10000 ms 00:20:16.351 00:20:16.351 NVM Command Set Attributes 00:20:16.351 ========================== 00:20:16.351 Submission Queue Entry Size 00:20:16.351 Max: 64 00:20:16.351 Min: 64 00:20:16.351 Completion Queue Entry Size 00:20:16.351 Max: 16 00:20:16.351 Min: 16 00:20:16.351 Number of Namespaces: 32 00:20:16.351 Compare Command: Supported 00:20:16.351 Write Uncorrectable Command: Not Supported 00:20:16.351 Dataset Management Command: Supported 00:20:16.351 Write Zeroes Command: Supported 00:20:16.351 Set Features Save Field: Not Supported 00:20:16.351 Reservations: Supported 00:20:16.351 Timestamp: Not Supported 00:20:16.351 Copy: Supported 00:20:16.351 Volatile Write Cache: Present 00:20:16.351 Atomic Write Unit (Normal): 1 00:20:16.351 Atomic Write Unit (PFail): 1 00:20:16.351 Atomic Compare & Write Unit: 1 00:20:16.351 Fused Compare & Write: Supported 00:20:16.351 Scatter-Gather List 00:20:16.351 SGL Command Set: Supported 00:20:16.351 SGL Keyed: Supported 00:20:16.351 SGL Bit Bucket Descriptor: Not Supported 00:20:16.351 SGL Metadata Pointer: Not Supported 00:20:16.351 Oversized SGL: Not Supported 00:20:16.351 SGL Metadata Address: Not Supported 00:20:16.351 SGL Offset: Supported 00:20:16.351 Transport SGL Data Block: Not Supported 00:20:16.351 Replay Protected Memory Block: Not Supported 00:20:16.351 00:20:16.351 Firmware Slot Information 00:20:16.351 ========================= 00:20:16.351 Active slot: 1 00:20:16.351 Slot 1 Firmware Revision: 24.01.1 00:20:16.351 00:20:16.351 00:20:16.351 Commands Supported and Effects 00:20:16.351 ============================== 00:20:16.351 Admin Commands 00:20:16.351 -------------- 00:20:16.351 Get Log Page (02h): Supported 00:20:16.351 Identify (06h): Supported 00:20:16.351 Abort (08h): Supported 00:20:16.351 Set Features (09h): Supported 00:20:16.351 Get Features (0Ah): Supported 00:20:16.351 Asynchronous Event Request (0Ch): Supported 00:20:16.351 Keep Alive (18h): Supported 00:20:16.351 I/O Commands 00:20:16.351 ------------ 00:20:16.351 Flush (00h): Supported LBA-Change 00:20:16.351 Write (01h): Supported LBA-Change 00:20:16.351 Read (02h): Supported 00:20:16.351 Compare (05h): Supported 00:20:16.351 Write Zeroes (08h): Supported LBA-Change 00:20:16.351 Dataset Management (09h): Supported LBA-Change 00:20:16.351 Copy (19h): Supported LBA-Change 00:20:16.351 Unknown (79h): Supported LBA-Change 00:20:16.351 Unknown (7Ah): Supported 00:20:16.351 00:20:16.351 Error Log 00:20:16.351 ========= 00:20:16.351 00:20:16.351 Arbitration 00:20:16.351 =========== 00:20:16.351 Arbitration Burst: 1 00:20:16.351 00:20:16.351 Power Management 00:20:16.351 ================ 00:20:16.351 
Number of Power States: 1 00:20:16.351 Current Power State: Power State #0 00:20:16.351 Power State #0: 00:20:16.351 Max Power: 0.00 W 00:20:16.351 Non-Operational State: Operational 00:20:16.351 Entry Latency: Not Reported 00:20:16.351 Exit Latency: Not Reported 00:20:16.351 Relative Read Throughput: 0 00:20:16.351 Relative Read Latency: 0 00:20:16.351 Relative Write Throughput: 0 00:20:16.351 Relative Write Latency: 0 00:20:16.351 Idle Power: Not Reported 00:20:16.351 Active Power: Not Reported 00:20:16.351 Non-Operational Permissive Mode: Not Supported 00:20:16.351 00:20:16.351 Health Information 00:20:16.351 ================== 00:20:16.351 Critical Warnings: 00:20:16.351 Available Spare Space: OK 00:20:16.351 Temperature: OK 00:20:16.351 Device Reliability: OK 00:20:16.351 Read Only: No 00:20:16.351 Volatile Memory Backup: OK 00:20:16.351 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:16.351 Temperature Threshold: [2024-11-26 04:16:17.994208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1513510) 00:20:16.351 [2024-11-26 04:16:17.994226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.351 [2024-11-26 04:16:17.994248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1560240, cid 7, qid 0 00:20:16.351 [2024-11-26 04:16:17.994321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.351 [2024-11-26 04:16:17.994327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.351 [2024-11-26 04:16:17.994331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1560240) on tqpair=0x1513510 00:20:16.351 [2024-11-26 04:16:17.994365] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:16.351 [2024-11-26 04:16:17.994377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.351 [2024-11-26 04:16:17.994383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.351 [2024-11-26 04:16:17.994390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.351 [2024-11-26 04:16:17.994395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.351 [2024-11-26 04:16:17.994417] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1513510) 00:20:16.351 [2024-11-26 04:16:17.994430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.351 [2024-11-26 04:16:17.994450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fcc0, cid 3, qid 0 
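Editor's note: the two admin commands visible in the surrounding debug trace decode as follows. GET LOG PAGE (02h) with cdw10:03ff0005 requests log page 0x05 (Commands Supported and Effects) with NUMDL=0x3ff, i.e. 1024 dwords = 4096 bytes, which matches the datal=4096/payload_size=4096 reported for cccid=7 earlier in the trace; GET FEATURES with cdw10:00000005 requests feature 0x05, which the NOTICE line itself labels ERROR_RECOVERY. A minimal sketch of issuing the same commands from a Linux initiator with nvme-cli follows; the /dev/nvme0 device name is an assumption (it depends on enumeration order) and the block is illustrative only, not part of this test run:
  # assumes nvme-cli on the initiator side of this setup
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme get-log /dev/nvme0 --log-id=0x05 --log-len=4096    # Commands Supported and Effects log, 4096 bytes as in the capsule above
  nvme get-feature /dev/nvme0 --feature-id=0x05           # feature 0x05 = Error Recovery
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1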
00:20:16.351 [2024-11-26 04:16:17.994503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.351 [2024-11-26 04:16:17.994514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.351 [2024-11-26 04:16:17.994518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994521] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fcc0) on tqpair=0x1513510 00:20:16.351 [2024-11-26 04:16:17.994529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994533] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.351 [2024-11-26 04:16:17.994536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1513510) 00:20:16.351 [2024-11-26 04:16:17.994542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.351 [2024-11-26 04:16:17.994562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fcc0, cid 3, qid 0 00:20:16.352 [2024-11-26 04:16:17.994630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.352 [2024-11-26 04:16:17.994641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.352 [2024-11-26 04:16:17.994645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.994661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fcc0) on tqpair=0x1513510 00:20:16.352 [2024-11-26 04:16:17.994666] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:16.352 [2024-11-26 04:16:17.994670] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:16.352 [2024-11-26 04:16:17.994679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.994684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.994687] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1513510) 00:20:16.352 [2024-11-26 04:16:17.994693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.352 [2024-11-26 04:16:17.998749] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fcc0, cid 3, qid 0 00:20:16.352 [2024-11-26 04:16:17.998772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.352 [2024-11-26 04:16:17.998779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.352 [2024-11-26 04:16:17.998782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.998786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fcc0) on tqpair=0x1513510 00:20:16.352 [2024-11-26 04:16:17.998800] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.998804] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.998807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1513510) 00:20:16.352 [2024-11-26 04:16:17.998815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.352 [2024-11-26 
04:16:17.998838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x155fcc0, cid 3, qid 0 00:20:16.352 [2024-11-26 04:16:17.998907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:16.352 [2024-11-26 04:16:17.998913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:16.352 [2024-11-26 04:16:17.998916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:16.352 [2024-11-26 04:16:17.998919] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x155fcc0) on tqpair=0x1513510 00:20:16.352 [2024-11-26 04:16:17.998927] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:16.352 0 Kelvin (-273 Celsius) 00:20:16.352 Available Spare: 0% 00:20:16.352 Available Spare Threshold: 0% 00:20:16.352 Life Percentage Used: 0% 00:20:16.352 Data Units Read: 0 00:20:16.352 Data Units Written: 0 00:20:16.352 Host Read Commands: 0 00:20:16.352 Host Write Commands: 0 00:20:16.352 Controller Busy Time: 0 minutes 00:20:16.352 Power Cycles: 0 00:20:16.352 Power On Hours: 0 hours 00:20:16.352 Unsafe Shutdowns: 0 00:20:16.352 Unrecoverable Media Errors: 0 00:20:16.352 Lifetime Error Log Entries: 0 00:20:16.352 Warning Temperature Time: 0 minutes 00:20:16.352 Critical Temperature Time: 0 minutes 00:20:16.352 00:20:16.352 Number of Queues 00:20:16.352 ================ 00:20:16.352 Number of I/O Submission Queues: 127 00:20:16.352 Number of I/O Completion Queues: 127 00:20:16.352 00:20:16.352 Active Namespaces 00:20:16.352 ================= 00:20:16.352 Namespace ID:1 00:20:16.352 Error Recovery Timeout: Unlimited 00:20:16.352 Command Set Identifier: NVM (00h) 00:20:16.352 Deallocate: Supported 00:20:16.352 Deallocated/Unwritten Error: Not Supported 00:20:16.352 Deallocated Read Value: Unknown 00:20:16.352 Deallocate in Write Zeroes: Not Supported 00:20:16.352 Deallocated Guard Field: 0xFFFF 00:20:16.352 Flush: Supported 00:20:16.352 Reservation: Supported 00:20:16.352 Namespace Sharing Capabilities: Multiple Controllers 00:20:16.352 Size (in LBAs): 131072 (0GiB) 00:20:16.352 Capacity (in LBAs): 131072 (0GiB) 00:20:16.352 Utilization (in LBAs): 131072 (0GiB) 00:20:16.352 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:16.352 EUI64: ABCDEF0123456789 00:20:16.352 UUID: 9ae6c627-5415-47a6-a024-6821b846832b 00:20:16.352 Thin Provisioning: Not Supported 00:20:16.352 Per-NS Atomic Units: Yes 00:20:16.352 Atomic Boundary Size (Normal): 0 00:20:16.352 Atomic Boundary Size (PFail): 0 00:20:16.352 Atomic Boundary Offset: 0 00:20:16.352 Maximum Single Source Range Length: 65535 00:20:16.352 Maximum Copy Length: 65535 00:20:16.352 Maximum Source Range Count: 1 00:20:16.352 NGUID/EUI64 Never Reused: No 00:20:16.352 Namespace Write Protected: No 00:20:16.352 Number of LBA Formats: 1 00:20:16.352 Current LBA Format: LBA Format #00 00:20:16.352 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:16.352 00:20:16.352 04:16:18 -- host/identify.sh@51 -- # sync 00:20:16.352 04:16:18 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:16.352 04:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.352 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.352 04:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.352 04:16:18 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:16.352 04:16:18 -- host/identify.sh@56 -- # nvmftestfini 00:20:16.352 04:16:18 -- nvmf/common.sh@476 -- # nvmfcleanup 
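Editor's note: the controller dump interleaved with the debug trace above is the nvmf_identify host test reading the SPDK bdev controller over NVMe/TCP; once it completes, the subsystem is deleted and the kernel modules are unloaded below. A sketch of reproducing the same dump by hand against the listener is shown here; the spdk_nvme_identify binary path and the subnqn key in the transport ID string are assumptions inferred from the spdk_nvme_perf invocations later in this log, not commands taken from the identify run itself:
  # sketch only; binary name/path assumed to mirror build/bin/spdk_nvme_perf used below
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'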
00:20:16.352 04:16:18 -- nvmf/common.sh@116 -- # sync 00:20:16.352 04:16:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:16.352 04:16:18 -- nvmf/common.sh@119 -- # set +e 00:20:16.352 04:16:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:16.352 04:16:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:16.352 rmmod nvme_tcp 00:20:16.611 rmmod nvme_fabrics 00:20:16.612 rmmod nvme_keyring 00:20:16.612 04:16:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:16.612 04:16:18 -- nvmf/common.sh@123 -- # set -e 00:20:16.612 04:16:18 -- nvmf/common.sh@124 -- # return 0 00:20:16.612 04:16:18 -- nvmf/common.sh@477 -- # '[' -n 93622 ']' 00:20:16.612 04:16:18 -- nvmf/common.sh@478 -- # killprocess 93622 00:20:16.612 04:16:18 -- common/autotest_common.sh@936 -- # '[' -z 93622 ']' 00:20:16.612 04:16:18 -- common/autotest_common.sh@940 -- # kill -0 93622 00:20:16.612 04:16:18 -- common/autotest_common.sh@941 -- # uname 00:20:16.612 04:16:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:16.612 04:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93622 00:20:16.612 04:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:16.612 04:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:16.612 killing process with pid 93622 00:20:16.612 04:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93622' 00:20:16.612 04:16:18 -- common/autotest_common.sh@955 -- # kill 93622 00:20:16.612 [2024-11-26 04:16:18.178943] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:16.612 04:16:18 -- common/autotest_common.sh@960 -- # wait 93622 00:20:16.871 04:16:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:16.871 04:16:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:16.871 04:16:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:16.871 04:16:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.871 04:16:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:16.871 04:16:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.871 04:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.871 04:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.871 04:16:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:16.871 00:20:16.871 real 0m2.903s 00:20:16.871 user 0m7.960s 00:20:16.871 sys 0m0.805s 00:20:16.871 04:16:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:16.871 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.871 ************************************ 00:20:16.871 END TEST nvmf_identify 00:20:16.871 ************************************ 00:20:16.871 04:16:18 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:16.871 04:16:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.871 04:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.871 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.871 ************************************ 00:20:16.871 START TEST nvmf_perf 00:20:16.871 ************************************ 00:20:16.871 04:16:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:17.130 * Looking for test storage... 
00:20:17.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:17.130 04:16:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:17.130 04:16:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:17.130 04:16:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:17.130 04:16:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:17.130 04:16:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:17.130 04:16:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:17.130 04:16:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:17.130 04:16:18 -- scripts/common.sh@335 -- # IFS=.-: 00:20:17.130 04:16:18 -- scripts/common.sh@335 -- # read -ra ver1 00:20:17.130 04:16:18 -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.130 04:16:18 -- scripts/common.sh@336 -- # read -ra ver2 00:20:17.130 04:16:18 -- scripts/common.sh@337 -- # local 'op=<' 00:20:17.130 04:16:18 -- scripts/common.sh@339 -- # ver1_l=2 00:20:17.131 04:16:18 -- scripts/common.sh@340 -- # ver2_l=1 00:20:17.131 04:16:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:17.131 04:16:18 -- scripts/common.sh@343 -- # case "$op" in 00:20:17.131 04:16:18 -- scripts/common.sh@344 -- # : 1 00:20:17.131 04:16:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:17.131 04:16:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:17.131 04:16:18 -- scripts/common.sh@364 -- # decimal 1 00:20:17.131 04:16:18 -- scripts/common.sh@352 -- # local d=1 00:20:17.131 04:16:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.131 04:16:18 -- scripts/common.sh@354 -- # echo 1 00:20:17.131 04:16:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:17.131 04:16:18 -- scripts/common.sh@365 -- # decimal 2 00:20:17.131 04:16:18 -- scripts/common.sh@352 -- # local d=2 00:20:17.131 04:16:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.131 04:16:18 -- scripts/common.sh@354 -- # echo 2 00:20:17.131 04:16:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:17.131 04:16:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:17.131 04:16:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:17.131 04:16:18 -- scripts/common.sh@367 -- # return 0 00:20:17.131 04:16:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.131 04:16:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:17.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.131 --rc genhtml_branch_coverage=1 00:20:17.131 --rc genhtml_function_coverage=1 00:20:17.131 --rc genhtml_legend=1 00:20:17.131 --rc geninfo_all_blocks=1 00:20:17.131 --rc geninfo_unexecuted_blocks=1 00:20:17.131 00:20:17.131 ' 00:20:17.131 04:16:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:17.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.131 --rc genhtml_branch_coverage=1 00:20:17.131 --rc genhtml_function_coverage=1 00:20:17.131 --rc genhtml_legend=1 00:20:17.131 --rc geninfo_all_blocks=1 00:20:17.131 --rc geninfo_unexecuted_blocks=1 00:20:17.131 00:20:17.131 ' 00:20:17.131 04:16:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:17.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.131 --rc genhtml_branch_coverage=1 00:20:17.131 --rc genhtml_function_coverage=1 00:20:17.131 --rc genhtml_legend=1 00:20:17.131 --rc geninfo_all_blocks=1 00:20:17.131 --rc geninfo_unexecuted_blocks=1 00:20:17.131 00:20:17.131 ' 00:20:17.131 
04:16:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:17.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.131 --rc genhtml_branch_coverage=1 00:20:17.131 --rc genhtml_function_coverage=1 00:20:17.131 --rc genhtml_legend=1 00:20:17.131 --rc geninfo_all_blocks=1 00:20:17.131 --rc geninfo_unexecuted_blocks=1 00:20:17.131 00:20:17.131 ' 00:20:17.131 04:16:18 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.131 04:16:18 -- nvmf/common.sh@7 -- # uname -s 00:20:17.131 04:16:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.131 04:16:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.131 04:16:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.131 04:16:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.131 04:16:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.131 04:16:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.131 04:16:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.131 04:16:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.131 04:16:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.131 04:16:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.131 04:16:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:17.131 04:16:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:20:17.131 04:16:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.131 04:16:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.131 04:16:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.131 04:16:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.131 04:16:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.131 04:16:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.131 04:16:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.131 04:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.131 04:16:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.131 04:16:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.131 04:16:18 -- paths/export.sh@5 -- # export PATH 00:20:17.131 04:16:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.131 04:16:18 -- nvmf/common.sh@46 -- # : 0 00:20:17.131 04:16:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:17.131 04:16:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:17.131 04:16:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:17.131 04:16:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.131 04:16:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.131 04:16:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:17.131 04:16:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:17.131 04:16:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:17.131 04:16:18 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:17.131 04:16:18 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:17.131 04:16:18 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:17.131 04:16:18 -- host/perf.sh@17 -- # nvmftestinit 00:20:17.131 04:16:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:17.131 04:16:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.131 04:16:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:17.131 04:16:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:17.131 04:16:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:17.131 04:16:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.131 04:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.131 04:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.131 04:16:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:17.131 04:16:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:17.131 04:16:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:17.131 04:16:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:17.131 04:16:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:17.131 04:16:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:17.131 04:16:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.131 04:16:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.131 04:16:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:17.131 04:16:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:17.131 04:16:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.131 04:16:18 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.131 04:16:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.131 04:16:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.131 04:16:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.131 04:16:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.131 04:16:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.131 04:16:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.131 04:16:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:17.131 04:16:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:17.131 Cannot find device "nvmf_tgt_br" 00:20:17.131 04:16:18 -- nvmf/common.sh@154 -- # true 00:20:17.131 04:16:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:17.131 Cannot find device "nvmf_tgt_br2" 00:20:17.131 04:16:18 -- nvmf/common.sh@155 -- # true 00:20:17.131 04:16:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:17.131 04:16:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:17.131 Cannot find device "nvmf_tgt_br" 00:20:17.131 04:16:18 -- nvmf/common.sh@157 -- # true 00:20:17.131 04:16:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:17.131 Cannot find device "nvmf_tgt_br2" 00:20:17.131 04:16:18 -- nvmf/common.sh@158 -- # true 00:20:17.131 04:16:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:17.131 04:16:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:17.390 04:16:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.390 04:16:18 -- nvmf/common.sh@161 -- # true 00:20:17.390 04:16:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.390 04:16:18 -- nvmf/common.sh@162 -- # true 00:20:17.390 04:16:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.390 04:16:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.390 04:16:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.390 04:16:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.390 04:16:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.390 04:16:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.390 04:16:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.390 04:16:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:17.390 04:16:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:17.390 04:16:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:17.390 04:16:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:17.390 04:16:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:17.390 04:16:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:17.390 04:16:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.390 04:16:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
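Editor's note: the nvmf_veth_init sequence running here builds the test topology: the target listens on 10.0.0.2 (and 10.0.0.3) inside the nvmf_tgt_ns_spdk network namespace while the initiator uses 10.0.0.1 on the host side. Condensed from the commands in this log into a minimal sketch (the second target interface, the nvmf_br bridge and the iptables rule are added the same way in the lines that follow):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br                # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                  # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                           # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up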
00:20:17.390 04:16:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.390 04:16:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:17.390 04:16:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:17.390 04:16:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.390 04:16:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.390 04:16:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.390 04:16:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.390 04:16:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.390 04:16:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:17.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:17.390 00:20:17.390 --- 10.0.0.2 ping statistics --- 00:20:17.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.391 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:17.391 04:16:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:17.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:17.391 00:20:17.391 --- 10.0.0.3 ping statistics --- 00:20:17.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.391 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:17.391 04:16:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:17.391 00:20:17.391 --- 10.0.0.1 ping statistics --- 00:20:17.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.391 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:17.391 04:16:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.391 04:16:19 -- nvmf/common.sh@421 -- # return 0 00:20:17.391 04:16:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:17.391 04:16:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.391 04:16:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:17.391 04:16:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:17.391 04:16:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.391 04:16:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:17.391 04:16:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:17.391 04:16:19 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:17.391 04:16:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:17.391 04:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.391 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:20:17.649 04:16:19 -- nvmf/common.sh@469 -- # nvmfpid=93855 00:20:17.649 04:16:19 -- nvmf/common.sh@470 -- # waitforlisten 93855 00:20:17.649 04:16:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:17.649 04:16:19 -- common/autotest_common.sh@829 -- # '[' -z 93855 ']' 00:20:17.649 04:16:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
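Editor's note: with the namespace reachable (all three pings above succeed with 0% packet loss), nvmfappstart launches nvmf_tgt inside it with a 0xF core mask (reactors on cores 0-3) and perf.sh then configures the target over the /var/tmp/spdk.sock RPC socket. The launch command below is verbatim from this run; the rpc.py calls are a condensed sketch of the configuration performed in the following lines, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  rpc.py nvmf_create_transport -t tcp -o                           # TCP transport
  rpc.py bdev_malloc_create 64 512                                 # 64 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420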
00:20:17.649 04:16:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.649 04:16:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.649 04:16:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.649 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:20:17.649 [2024-11-26 04:16:19.209624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:17.649 [2024-11-26 04:16:19.209732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.649 [2024-11-26 04:16:19.343145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.908 [2024-11-26 04:16:19.418369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:17.908 [2024-11-26 04:16:19.418510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.908 [2024-11-26 04:16:19.418522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.908 [2024-11-26 04:16:19.418531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.908 [2024-11-26 04:16:19.418688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.908 [2024-11-26 04:16:19.418821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.908 [2024-11-26 04:16:19.419430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.908 [2024-11-26 04:16:19.419478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.475 04:16:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.475 04:16:20 -- common/autotest_common.sh@862 -- # return 0 00:20:18.475 04:16:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:18.475 04:16:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.475 04:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.475 04:16:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.475 04:16:20 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:18.475 04:16:20 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:19.042 04:16:20 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:19.042 04:16:20 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:19.300 04:16:20 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:19.300 04:16:20 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:19.559 04:16:21 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:19.559 04:16:21 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:19.559 04:16:21 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:19.559 04:16:21 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:19.559 04:16:21 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.559 [2024-11-26 04:16:21.284504] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.559 04:16:21 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.817 04:16:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:19.817 04:16:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.076 04:16:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:20.076 04:16:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:20.334 04:16:22 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.592 [2024-11-26 04:16:22.206121] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.592 04:16:22 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:20.851 04:16:22 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:20.851 04:16:22 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:20.851 04:16:22 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:20.851 04:16:22 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:22.228 Initializing NVMe Controllers 00:20:22.228 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:22.228 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:22.228 Initialization complete. Launching workers. 00:20:22.228 ======================================================== 00:20:22.228 Latency(us) 00:20:22.228 Device Information : IOPS MiB/s Average min max 00:20:22.228 PCIE (0000:00:06.0) NSID 1 from core 0: 23617.37 92.26 1355.28 352.99 8050.18 00:20:22.228 ======================================================== 00:20:22.228 Total : 23617.37 92.26 1355.28 352.99 8050.18 00:20:22.228 00:20:22.228 04:16:23 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:23.604 Initializing NVMe Controllers 00:20:23.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:23.604 Initialization complete. Launching workers. 
00:20:23.604 ======================================================== 00:20:23.604 Latency(us) 00:20:23.604 Device Information : IOPS MiB/s Average min max 00:20:23.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3531.20 13.79 282.88 103.48 7104.81 00:20:23.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.70 0.48 8216.90 4996.54 12018.93 00:20:23.604 ======================================================== 00:20:23.604 Total : 3652.90 14.27 547.21 103.48 12018.93 00:20:23.604 00:20:23.604 04:16:24 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.980 Initializing NVMe Controllers 00:20:24.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.980 Initialization complete. Launching workers. 00:20:24.980 ======================================================== 00:20:24.980 Latency(us) 00:20:24.980 Device Information : IOPS MiB/s Average min max 00:20:24.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10396.94 40.61 3079.36 601.58 9354.56 00:20:24.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2638.13 10.31 12254.75 5903.28 23911.36 00:20:24.980 ======================================================== 00:20:24.980 Total : 13035.07 50.92 4936.34 601.58 23911.36 00:20:24.980 00:20:24.980 04:16:26 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:24.980 04:16:26 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:27.515 Initializing NVMe Controllers 00:20:27.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.515 Controller IO queue size 128, less than required. 00:20:27.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.515 Controller IO queue size 128, less than required. 00:20:27.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:27.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:27.515 Initialization complete. Launching workers. 
00:20:27.515 ======================================================== 00:20:27.515 Latency(us) 00:20:27.515 Device Information : IOPS MiB/s Average min max 00:20:27.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1731.79 432.95 74822.52 47915.44 137213.50 00:20:27.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 559.80 139.95 234826.45 100089.86 383759.32 00:20:27.515 ======================================================== 00:20:27.515 Total : 2291.60 572.90 113909.05 47915.44 383759.32 00:20:27.515 00:20:27.515 04:16:28 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:27.515 No valid NVMe controllers or AIO or URING devices found 00:20:27.515 Initializing NVMe Controllers 00:20:27.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.515 Controller IO queue size 128, less than required. 00:20:27.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.515 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:27.515 Controller IO queue size 128, less than required. 00:20:27.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.515 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:27.515 WARNING: Some requested NVMe devices were skipped 00:20:27.515 04:16:29 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:30.047 Initializing NVMe Controllers 00:20:30.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.047 Controller IO queue size 128, less than required. 00:20:30.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:30.047 Controller IO queue size 128, less than required. 00:20:30.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:30.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:30.047 Initialization complete. Launching workers. 
00:20:30.047 00:20:30.047 ==================== 00:20:30.047 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:30.047 TCP transport: 00:20:30.047 polls: 8474 00:20:30.047 idle_polls: 6047 00:20:30.047 sock_completions: 2427 00:20:30.047 nvme_completions: 4828 00:20:30.047 submitted_requests: 7437 00:20:30.047 queued_requests: 1 00:20:30.047 00:20:30.047 ==================== 00:20:30.047 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:30.047 TCP transport: 00:20:30.047 polls: 10985 00:20:30.047 idle_polls: 8543 00:20:30.047 sock_completions: 2442 00:20:30.047 nvme_completions: 4772 00:20:30.047 submitted_requests: 7244 00:20:30.047 queued_requests: 1 00:20:30.047 ======================================================== 00:20:30.047 Latency(us) 00:20:30.047 Device Information : IOPS MiB/s Average min max 00:20:30.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1270.01 317.50 103995.81 69496.42 179680.76 00:20:30.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1256.02 314.00 102712.95 64247.06 166574.72 00:20:30.047 ======================================================== 00:20:30.047 Total : 2526.03 631.51 103357.93 64247.06 179680.76 00:20:30.047 00:20:30.047 04:16:31 -- host/perf.sh@66 -- # sync 00:20:30.047 04:16:31 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.305 04:16:31 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:30.305 04:16:31 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:30.305 04:16:31 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:30.564 04:16:32 -- host/perf.sh@72 -- # ls_guid=4a0ef2a1-318c-4943-9d11-2c932eecd8c1 00:20:30.564 04:16:32 -- host/perf.sh@73 -- # get_lvs_free_mb 4a0ef2a1-318c-4943-9d11-2c932eecd8c1 00:20:30.564 04:16:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4a0ef2a1-318c-4943-9d11-2c932eecd8c1 00:20:30.564 04:16:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:30.564 04:16:32 -- common/autotest_common.sh@1355 -- # local fc 00:20:30.564 04:16:32 -- common/autotest_common.sh@1356 -- # local cs 00:20:30.564 04:16:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:30.822 04:16:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:30.822 { 00:20:30.822 "base_bdev": "Nvme0n1", 00:20:30.822 "block_size": 4096, 00:20:30.822 "cluster_size": 4194304, 00:20:30.822 "free_clusters": 1278, 00:20:30.822 "name": "lvs_0", 00:20:30.822 "total_data_clusters": 1278, 00:20:30.822 "uuid": "4a0ef2a1-318c-4943-9d11-2c932eecd8c1" 00:20:30.822 } 00:20:30.822 ]' 00:20:30.822 04:16:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4a0ef2a1-318c-4943-9d11-2c932eecd8c1") .free_clusters' 00:20:30.822 04:16:32 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:30.822 04:16:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4a0ef2a1-318c-4943-9d11-2c932eecd8c1") .cluster_size' 00:20:30.822 04:16:32 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:30.822 04:16:32 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:30.822 04:16:32 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:30.822 5112 00:20:30.822 04:16:32 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:30.822 04:16:32 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 4a0ef2a1-318c-4943-9d11-2c932eecd8c1 lbd_0 5112 00:20:31.080 04:16:32 -- host/perf.sh@80 -- # lb_guid=1543ac2f-bd41-44c1-8384-badd2f197a95 00:20:31.080 04:16:32 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1543ac2f-bd41-44c1-8384-badd2f197a95 lvs_n_0 00:20:31.647 04:16:33 -- host/perf.sh@83 -- # ls_nested_guid=510ad669-456e-4bfc-8666-1e1abeb5b15b 00:20:31.647 04:16:33 -- host/perf.sh@84 -- # get_lvs_free_mb 510ad669-456e-4bfc-8666-1e1abeb5b15b 00:20:31.647 04:16:33 -- common/autotest_common.sh@1353 -- # local lvs_uuid=510ad669-456e-4bfc-8666-1e1abeb5b15b 00:20:31.647 04:16:33 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:31.647 04:16:33 -- common/autotest_common.sh@1355 -- # local fc 00:20:31.647 04:16:33 -- common/autotest_common.sh@1356 -- # local cs 00:20:31.647 04:16:33 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:31.647 04:16:33 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:31.647 { 00:20:31.647 "base_bdev": "Nvme0n1", 00:20:31.647 "block_size": 4096, 00:20:31.647 "cluster_size": 4194304, 00:20:31.647 "free_clusters": 0, 00:20:31.647 "name": "lvs_0", 00:20:31.647 "total_data_clusters": 1278, 00:20:31.647 "uuid": "4a0ef2a1-318c-4943-9d11-2c932eecd8c1" 00:20:31.647 }, 00:20:31.647 { 00:20:31.647 "base_bdev": "1543ac2f-bd41-44c1-8384-badd2f197a95", 00:20:31.647 "block_size": 4096, 00:20:31.647 "cluster_size": 4194304, 00:20:31.647 "free_clusters": 1276, 00:20:31.647 "name": "lvs_n_0", 00:20:31.647 "total_data_clusters": 1276, 00:20:31.647 "uuid": "510ad669-456e-4bfc-8666-1e1abeb5b15b" 00:20:31.647 } 00:20:31.647 ]' 00:20:31.647 04:16:33 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="510ad669-456e-4bfc-8666-1e1abeb5b15b") .free_clusters' 00:20:31.906 04:16:33 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:31.906 04:16:33 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="510ad669-456e-4bfc-8666-1e1abeb5b15b") .cluster_size' 00:20:31.906 04:16:33 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:31.906 04:16:33 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:31.906 04:16:33 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:31.906 5104 00:20:31.906 04:16:33 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:31.906 04:16:33 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 510ad669-456e-4bfc-8666-1e1abeb5b15b lbd_nest_0 5104 00:20:32.165 04:16:33 -- host/perf.sh@88 -- # lb_nested_guid=8f3cab0e-aabd-494c-962c-0ce64896309c 00:20:32.165 04:16:33 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.423 04:16:33 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:32.423 04:16:33 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8f3cab0e-aabd-494c-962c-0ce64896309c 00:20:32.423 04:16:34 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.682 04:16:34 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:32.682 04:16:34 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:32.682 04:16:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:32.682 04:16:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:32.682 04:16:34 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.941 No valid NVMe controllers or AIO or URING devices found 00:20:32.941 Initializing NVMe Controllers 00:20:32.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.941 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:32.941 WARNING: Some requested NVMe devices were skipped 00:20:32.941 04:16:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:32.941 04:16:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.148 Initializing NVMe Controllers 00:20:45.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:45.148 Initialization complete. Launching workers. 00:20:45.148 ======================================================== 00:20:45.148 Latency(us) 00:20:45.148 Device Information : IOPS MiB/s Average min max 00:20:45.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 868.52 108.57 1150.12 379.91 8624.59 00:20:45.148 ======================================================== 00:20:45.148 Total : 868.52 108.57 1150.12 379.91 8624.59 00:20:45.148 00:20:45.148 04:16:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:45.148 04:16:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:45.148 04:16:44 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.148 No valid NVMe controllers or AIO or URING devices found 00:20:45.148 Initializing NVMe Controllers 00:20:45.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.148 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:45.148 WARNING: Some requested NVMe devices were skipped 00:20:45.148 04:16:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:45.148 04:16:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.126 Initializing NVMe Controllers 00:20:55.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.126 Initialization complete. Launching workers. 
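The 512-byte passes in this sweep print "No valid NVMe controllers or AIO or URING devices found" because the exported namespace has a 4096-byte block size, so a 512-byte I/O size cannot be issued against it and the controller is skipped; only the 131072-byte passes produce a latency table. The sweep itself is two nested loops over queue depth and I/O size. A minimal standalone sketch of it, assuming the same repo layout and the 10.0.0.2:4420 listener used in this run, would be:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
for qd in 1 32 128; do            # qd_depth=("1" "32" "128")
  for io in 512 131072; do        # io_size=("512" "131072")
    # 50/50 random read/write for 10 seconds against the NVMe/TCP listener
    "$perf" -q "$qd" -o "$io" -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done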
00:20:55.126 ======================================================== 00:20:55.126 Latency(us) 00:20:55.126 Device Information : IOPS MiB/s Average min max 00:20:55.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 983.70 122.96 32562.73 7898.65 285605.10 00:20:55.126 ======================================================== 00:20:55.126 Total : 983.70 122.96 32562.73 7898.65 285605.10 00:20:55.126 00:20:55.126 04:16:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:55.126 04:16:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.126 04:16:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.126 No valid NVMe controllers or AIO or URING devices found 00:20:55.126 Initializing NVMe Controllers 00:20:55.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.126 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:55.126 WARNING: Some requested NVMe devices were skipped 00:20:55.126 04:16:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.126 04:16:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:05.107 Initializing NVMe Controllers 00:21:05.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.107 Controller IO queue size 128, less than required. 00:21:05.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.107 Initialization complete. Launching workers. 
00:21:05.107 ======================================================== 00:21:05.107 Latency(us) 00:21:05.107 Device Information : IOPS MiB/s Average min max 00:21:05.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3748.06 468.51 34146.16 11375.32 74346.77 00:21:05.107 ======================================================== 00:21:05.107 Total : 3748.06 468.51 34146.16 11375.32 74346.77 00:21:05.107 00:21:05.107 04:17:06 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.107 04:17:06 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8f3cab0e-aabd-494c-962c-0ce64896309c 00:21:05.107 04:17:06 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:05.376 04:17:07 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1543ac2f-bd41-44c1-8384-badd2f197a95 00:21:05.704 04:17:07 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:06.005 04:17:07 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:06.005 04:17:07 -- host/perf.sh@114 -- # nvmftestfini 00:21:06.005 04:17:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:06.005 04:17:07 -- nvmf/common.sh@116 -- # sync 00:21:06.005 04:17:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:06.005 04:17:07 -- nvmf/common.sh@119 -- # set +e 00:21:06.005 04:17:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:06.005 04:17:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:06.005 rmmod nvme_tcp 00:21:06.005 rmmod nvme_fabrics 00:21:06.005 rmmod nvme_keyring 00:21:06.005 04:17:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:06.005 04:17:07 -- nvmf/common.sh@123 -- # set -e 00:21:06.005 04:17:07 -- nvmf/common.sh@124 -- # return 0 00:21:06.005 04:17:07 -- nvmf/common.sh@477 -- # '[' -n 93855 ']' 00:21:06.005 04:17:07 -- nvmf/common.sh@478 -- # killprocess 93855 00:21:06.005 04:17:07 -- common/autotest_common.sh@936 -- # '[' -z 93855 ']' 00:21:06.005 04:17:07 -- common/autotest_common.sh@940 -- # kill -0 93855 00:21:06.005 04:17:07 -- common/autotest_common.sh@941 -- # uname 00:21:06.005 04:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:06.005 04:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93855 00:21:06.005 killing process with pid 93855 00:21:06.005 04:17:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:06.005 04:17:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:06.005 04:17:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93855' 00:21:06.005 04:17:07 -- common/autotest_common.sh@955 -- # kill 93855 00:21:06.005 04:17:07 -- common/autotest_common.sh@960 -- # wait 93855 00:21:06.290 04:17:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:06.290 04:17:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:06.290 04:17:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:06.290 04:17:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.290 04:17:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:06.290 04:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.290 04:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.290 04:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.290 04:17:08 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:06.550 00:21:06.550 real 0m49.485s 00:21:06.550 user 3m6.488s 00:21:06.550 sys 0m10.371s 00:21:06.550 04:17:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:06.550 04:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:06.550 ************************************ 00:21:06.550 END TEST nvmf_perf 00:21:06.550 ************************************ 00:21:06.550 04:17:08 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:06.550 04:17:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:06.550 04:17:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:06.550 04:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:06.550 ************************************ 00:21:06.550 START TEST nvmf_fio_host 00:21:06.550 ************************************ 00:21:06.550 04:17:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:06.550 * Looking for test storage... 00:21:06.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:06.550 04:17:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:06.550 04:17:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:06.550 04:17:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:06.550 04:17:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:06.550 04:17:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:06.550 04:17:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:06.550 04:17:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:06.550 04:17:08 -- scripts/common.sh@335 -- # IFS=.-: 00:21:06.550 04:17:08 -- scripts/common.sh@335 -- # read -ra ver1 00:21:06.550 04:17:08 -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.550 04:17:08 -- scripts/common.sh@336 -- # read -ra ver2 00:21:06.550 04:17:08 -- scripts/common.sh@337 -- # local 'op=<' 00:21:06.550 04:17:08 -- scripts/common.sh@339 -- # ver1_l=2 00:21:06.550 04:17:08 -- scripts/common.sh@340 -- # ver2_l=1 00:21:06.550 04:17:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:06.550 04:17:08 -- scripts/common.sh@343 -- # case "$op" in 00:21:06.550 04:17:08 -- scripts/common.sh@344 -- # : 1 00:21:06.550 04:17:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:06.550 04:17:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.550 04:17:08 -- scripts/common.sh@364 -- # decimal 1 00:21:06.550 04:17:08 -- scripts/common.sh@352 -- # local d=1 00:21:06.550 04:17:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.550 04:17:08 -- scripts/common.sh@354 -- # echo 1 00:21:06.550 04:17:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:06.550 04:17:08 -- scripts/common.sh@365 -- # decimal 2 00:21:06.550 04:17:08 -- scripts/common.sh@352 -- # local d=2 00:21:06.550 04:17:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.550 04:17:08 -- scripts/common.sh@354 -- # echo 2 00:21:06.550 04:17:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:06.550 04:17:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:06.550 04:17:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:06.550 04:17:08 -- scripts/common.sh@367 -- # return 0 00:21:06.550 04:17:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.550 04:17:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:06.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.550 --rc genhtml_branch_coverage=1 00:21:06.550 --rc genhtml_function_coverage=1 00:21:06.550 --rc genhtml_legend=1 00:21:06.550 --rc geninfo_all_blocks=1 00:21:06.550 --rc geninfo_unexecuted_blocks=1 00:21:06.550 00:21:06.550 ' 00:21:06.550 04:17:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:06.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.550 --rc genhtml_branch_coverage=1 00:21:06.550 --rc genhtml_function_coverage=1 00:21:06.550 --rc genhtml_legend=1 00:21:06.550 --rc geninfo_all_blocks=1 00:21:06.550 --rc geninfo_unexecuted_blocks=1 00:21:06.550 00:21:06.550 ' 00:21:06.550 04:17:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:06.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.550 --rc genhtml_branch_coverage=1 00:21:06.550 --rc genhtml_function_coverage=1 00:21:06.550 --rc genhtml_legend=1 00:21:06.550 --rc geninfo_all_blocks=1 00:21:06.550 --rc geninfo_unexecuted_blocks=1 00:21:06.550 00:21:06.550 ' 00:21:06.550 04:17:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:06.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.550 --rc genhtml_branch_coverage=1 00:21:06.550 --rc genhtml_function_coverage=1 00:21:06.550 --rc genhtml_legend=1 00:21:06.550 --rc geninfo_all_blocks=1 00:21:06.550 --rc geninfo_unexecuted_blocks=1 00:21:06.550 00:21:06.550 ' 00:21:06.550 04:17:08 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.551 04:17:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.551 04:17:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.551 04:17:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.551 04:17:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- paths/export.sh@5 -- # export PATH 00:21:06.551 04:17:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.551 04:17:08 -- nvmf/common.sh@7 -- # uname -s 00:21:06.551 04:17:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.551 04:17:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.551 04:17:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.551 04:17:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.551 04:17:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.551 04:17:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.551 04:17:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.551 04:17:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.551 04:17:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.551 04:17:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.551 04:17:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:21:06.551 04:17:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:21:06.551 04:17:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.551 04:17:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.551 04:17:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.551 04:17:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.551 04:17:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.551 04:17:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.551 04:17:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.551 04:17:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- paths/export.sh@5 -- # export PATH 00:21:06.551 04:17:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.551 04:17:08 -- nvmf/common.sh@46 -- # : 0 00:21:06.551 04:17:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:06.551 04:17:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:06.551 04:17:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:06.551 04:17:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.551 04:17:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.551 04:17:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:06.551 04:17:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:06.551 04:17:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:06.551 04:17:08 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.551 04:17:08 -- host/fio.sh@14 -- # nvmftestinit 00:21:06.551 04:17:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:06.551 04:17:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.551 04:17:08 -- nvmf/common.sh@436 -- # prepare_net_devs 
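Because NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, and the next stretch of the trace builds the test network out of a network namespace, veth pairs and a bridge. A condensed sketch of that topology, using the interface names and 10.0.0.0/24 addresses visible in the trace (ordering simplified, and the second target interface nvmf_tgt_if2 / 10.0.0.3 omitted), looks like this:

# target side lives in its own namespace; the initiator stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target    <-> bridge
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener address

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                      # let the bridge forward
ping -c 1 10.0.0.2                                                       # initiator -> target reachability check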
00:21:06.551 04:17:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:06.551 04:17:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:06.551 04:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.551 04:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.551 04:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.551 04:17:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:06.551 04:17:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:06.551 04:17:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:06.551 04:17:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:06.551 04:17:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:06.551 04:17:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:06.551 04:17:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.551 04:17:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.551 04:17:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:06.551 04:17:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:06.551 04:17:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:06.551 04:17:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:06.551 04:17:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:06.551 04:17:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.551 04:17:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:06.551 04:17:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:06.551 04:17:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:06.551 04:17:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:06.551 04:17:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:06.810 04:17:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:06.810 Cannot find device "nvmf_tgt_br" 00:21:06.810 04:17:08 -- nvmf/common.sh@154 -- # true 00:21:06.810 04:17:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.810 Cannot find device "nvmf_tgt_br2" 00:21:06.810 04:17:08 -- nvmf/common.sh@155 -- # true 00:21:06.810 04:17:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:06.810 04:17:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:06.810 Cannot find device "nvmf_tgt_br" 00:21:06.810 04:17:08 -- nvmf/common.sh@157 -- # true 00:21:06.810 04:17:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:06.810 Cannot find device "nvmf_tgt_br2" 00:21:06.810 04:17:08 -- nvmf/common.sh@158 -- # true 00:21:06.810 04:17:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:06.810 04:17:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:06.811 04:17:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.811 04:17:08 -- nvmf/common.sh@161 -- # true 00:21:06.811 04:17:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.811 04:17:08 -- nvmf/common.sh@162 -- # true 00:21:06.811 04:17:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.811 04:17:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.811 04:17:08 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.811 04:17:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.811 04:17:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.811 04:17:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.811 04:17:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.811 04:17:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:06.811 04:17:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:06.811 04:17:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:06.811 04:17:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:06.811 04:17:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:06.811 04:17:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:06.811 04:17:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:06.811 04:17:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.811 04:17:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.811 04:17:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:06.811 04:17:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:06.811 04:17:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.811 04:17:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:07.069 04:17:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:07.069 04:17:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:07.069 04:17:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:07.069 04:17:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:07.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:21:07.069 00:21:07.069 --- 10.0.0.2 ping statistics --- 00:21:07.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.069 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:07.069 04:17:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:07.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:07.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:21:07.069 00:21:07.069 --- 10.0.0.3 ping statistics --- 00:21:07.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.069 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:07.069 04:17:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:07.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:07.069 00:21:07.069 --- 10.0.0.1 ping statistics --- 00:21:07.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.069 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:07.069 04:17:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.069 04:17:08 -- nvmf/common.sh@421 -- # return 0 00:21:07.069 04:17:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:07.069 04:17:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.069 04:17:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:07.069 04:17:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:07.069 04:17:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.069 04:17:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:07.069 04:17:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:07.069 04:17:08 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:07.069 04:17:08 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:07.069 04:17:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.069 04:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.069 04:17:08 -- host/fio.sh@24 -- # nvmfpid=94814 00:21:07.069 04:17:08 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.069 04:17:08 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.069 04:17:08 -- host/fio.sh@28 -- # waitforlisten 94814 00:21:07.069 04:17:08 -- common/autotest_common.sh@829 -- # '[' -z 94814 ']' 00:21:07.069 04:17:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.069 04:17:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.069 04:17:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.070 04:17:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.070 04:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.070 [2024-11-26 04:17:08.710369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:07.070 [2024-11-26 04:17:08.710452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.328 [2024-11-26 04:17:08.854679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.328 [2024-11-26 04:17:08.935579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:07.328 [2024-11-26 04:17:08.935768] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.328 [2024-11-26 04:17:08.935786] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.328 [2024-11-26 04:17:08.935798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
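The target whose startup banner appears here was launched inside the namespace, and once its RPC socket is up host/fio.sh provisions a malloc-backed subsystem over it. Reduced to the bare commands seen in the trace (same paths, names and addresses; waitforlisten and error handling omitted), the sequence is roughly:

spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

rpc="$spdk/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, flags as in the trace
"$rpc" bdev_malloc_create 64 512 -b Malloc1                            # 64 MiB RAM-backed bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420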
00:21:07.328 [2024-11-26 04:17:08.936309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.328 [2024-11-26 04:17:08.936506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.328 [2024-11-26 04:17:08.936661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.328 [2024-11-26 04:17:08.936678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.893 04:17:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.893 04:17:09 -- common/autotest_common.sh@862 -- # return 0 00:21:07.893 04:17:09 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:08.152 [2024-11-26 04:17:09.825667] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.152 04:17:09 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:08.152 04:17:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.152 04:17:09 -- common/autotest_common.sh@10 -- # set +x 00:21:08.411 04:17:09 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:08.411 Malloc1 00:21:08.411 04:17:10 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.669 04:17:10 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.928 04:17:10 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.187 [2024-11-26 04:17:10.793182] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.187 04:17:10 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:09.446 04:17:11 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:09.446 04:17:11 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.446 04:17:11 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.446 04:17:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:09.446 04:17:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:09.446 04:17:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:09.446 04:17:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:09.446 04:17:11 -- common/autotest_common.sh@1330 -- # shift 00:21:09.446 04:17:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:09.446 04:17:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:09.446 04:17:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:09.446 04:17:11 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:09.446 04:17:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:09.446 04:17:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:09.446 04:17:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:09.446 04:17:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.446 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:09.446 fio-3.35 00:21:09.446 Starting 1 thread 00:21:11.978 00:21:11.978 test: (groupid=0, jobs=1): err= 0: pid=94944: Tue Nov 26 04:17:13 2024 00:21:11.978 read: IOPS=11.4k, BW=44.5MiB/s (46.7MB/s)(89.2MiB/2005msec) 00:21:11.978 slat (nsec): min=1735, max=362156, avg=2358.30, stdev=3239.74 00:21:11.978 clat (usec): min=3333, max=11140, avg=5992.48, stdev=550.72 00:21:11.978 lat (usec): min=3384, max=11142, avg=5994.84, stdev=550.74 00:21:11.978 clat percentiles (usec): 00:21:11.978 | 1.00th=[ 4948], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:21:11.978 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:21:11.978 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6849], 00:21:11.978 | 99.00th=[ 7504], 99.50th=[ 8225], 99.90th=[10159], 99.95th=[10945], 00:21:11.978 | 99.99th=[11076] 00:21:11.978 bw ( KiB/s): min=44790, max=46408, per=99.91%, avg=45521.50, stdev=729.84, samples=4 00:21:11.978 iops : min=11197, max=11602, avg=11380.25, stdev=182.63, samples=4 00:21:11.978 write: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(88.7MiB/2005msec); 0 zone resets 00:21:11.978 slat (nsec): min=1833, max=296928, avg=2422.09, stdev=2461.27 00:21:11.978 clat (usec): min=2551, max=9894, avg=5227.85, stdev=443.96 00:21:11.978 lat (usec): min=2564, max=9896, avg=5230.27, stdev=444.04 00:21:11.978 clat percentiles (usec): 00:21:11.978 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:21:11.978 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5342], 00:21:11.978 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5866], 00:21:11.978 | 99.00th=[ 6259], 99.50th=[ 7111], 99.90th=[ 9241], 99.95th=[ 9503], 00:21:11.978 | 99.99th=[ 9765] 00:21:11.978 bw ( KiB/s): min=45040, max=45520, per=99.93%, avg=45257.25, stdev=228.99, samples=4 00:21:11.978 iops : min=11260, max=11380, avg=11314.25, stdev=57.31, samples=4 00:21:11.978 lat (msec) : 4=0.21%, 10=99.69%, 20=0.10% 00:21:11.978 cpu : usr=64.47%, sys=25.05%, ctx=9, majf=0, minf=5 00:21:11.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:11.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:11.978 issued rwts: total=22838,22701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:11.978 00:21:11.978 Run status group 0 (all jobs): 00:21:11.978 READ: bw=44.5MiB/s (46.7MB/s), 44.5MiB/s-44.5MiB/s (46.7MB/s-46.7MB/s), io=89.2MiB (93.5MB), 
run=2005-2005msec 00:21:11.978 WRITE: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=88.7MiB (93.0MB), run=2005-2005msec 00:21:11.978 04:17:13 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:11.978 04:17:13 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:11.978 04:17:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:11.978 04:17:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:11.978 04:17:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:11.978 04:17:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:11.978 04:17:13 -- common/autotest_common.sh@1330 -- # shift 00:21:11.978 04:17:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:11.978 04:17:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:11.978 04:17:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:11.978 04:17:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:11.978 04:17:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:11.978 04:17:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:11.978 04:17:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:11.978 04:17:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:11.978 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:11.978 fio-3.35 00:21:11.978 Starting 1 thread 00:21:14.514 00:21:14.514 test: (groupid=0, jobs=1): err= 0: pid=94989: Tue Nov 26 04:17:16 2024 00:21:14.514 read: IOPS=9410, BW=147MiB/s (154MB/s)(295MiB/2004msec) 00:21:14.514 slat (usec): min=2, max=133, avg= 3.35, stdev= 2.30 00:21:14.514 clat (usec): min=2086, max=15708, avg=8115.91, stdev=1841.67 00:21:14.514 lat (usec): min=2089, max=15710, avg=8119.26, stdev=1841.71 00:21:14.514 clat percentiles (usec): 00:21:14.514 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6390], 00:21:14.515 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8586], 00:21:14.515 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10814], 00:21:14.515 | 99.00th=[12780], 99.50th=[13566], 99.90th=[15401], 99.95th=[15533], 00:21:14.515 | 99.99th=[15664] 00:21:14.515 bw ( KiB/s): min=63360, max=92998, per=49.37%, avg=74337.50, stdev=13154.54, samples=4 00:21:14.515 iops : 
min= 3960, max= 5812, avg=4646.00, stdev=821.98, samples=4 00:21:14.515 write: IOPS=5694, BW=89.0MiB/s (93.3MB/s)(152MiB/1711msec); 0 zone resets 00:21:14.515 slat (usec): min=29, max=360, avg=33.06, stdev= 8.65 00:21:14.515 clat (usec): min=2163, max=15260, avg=9732.62, stdev=1476.24 00:21:14.515 lat (usec): min=2193, max=15289, avg=9765.68, stdev=1476.50 00:21:14.515 clat percentiles (usec): 00:21:14.515 | 1.00th=[ 6652], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8455], 00:21:14.515 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:21:14.515 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11731], 95.00th=[12256], 00:21:14.515 | 99.00th=[13435], 99.50th=[14353], 99.90th=[14877], 99.95th=[15139], 00:21:14.515 | 99.99th=[15270] 00:21:14.515 bw ( KiB/s): min=65408, max=96351, per=84.86%, avg=77319.75, stdev=13614.18, samples=4 00:21:14.515 iops : min= 4088, max= 6021, avg=4832.25, stdev=850.45, samples=4 00:21:14.515 lat (msec) : 4=0.49%, 10=74.58%, 20=24.93% 00:21:14.515 cpu : usr=69.15%, sys=20.12%, ctx=5, majf=0, minf=2 00:21:14.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:14.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:14.515 issued rwts: total=18859,9743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:14.515 00:21:14.515 Run status group 0 (all jobs): 00:21:14.515 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2004-2004msec 00:21:14.515 WRITE: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=152MiB (160MB), run=1711-1711msec 00:21:14.515 04:17:16 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.774 04:17:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:14.774 04:17:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:14.774 04:17:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:14.774 04:17:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:14.774 04:17:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:14.774 04:17:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:14.774 04:17:16 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:14.774 04:17:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:14.774 04:17:16 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:14.774 04:17:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:14.774 04:17:16 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:15.033 Nvme0n1 00:21:15.033 04:17:16 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:15.292 04:17:16 -- host/fio.sh@53 -- # ls_guid=90c4989e-fcc3-45c4-b13e-7289a19d1148 00:21:15.292 04:17:16 -- host/fio.sh@54 -- # get_lvs_free_mb 90c4989e-fcc3-45c4-b13e-7289a19d1148 00:21:15.292 04:17:16 -- common/autotest_common.sh@1353 -- # local lvs_uuid=90c4989e-fcc3-45c4-b13e-7289a19d1148 00:21:15.292 04:17:16 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:15.292 04:17:16 -- common/autotest_common.sh@1355 -- # local fc 00:21:15.292 04:17:16 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:15.292 04:17:16 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:15.551 04:17:17 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:15.551 { 00:21:15.551 "base_bdev": "Nvme0n1", 00:21:15.551 "block_size": 4096, 00:21:15.551 "cluster_size": 1073741824, 00:21:15.551 "free_clusters": 4, 00:21:15.551 "name": "lvs_0", 00:21:15.551 "total_data_clusters": 4, 00:21:15.551 "uuid": "90c4989e-fcc3-45c4-b13e-7289a19d1148" 00:21:15.551 } 00:21:15.551 ]' 00:21:15.551 04:17:17 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="90c4989e-fcc3-45c4-b13e-7289a19d1148") .free_clusters' 00:21:15.551 04:17:17 -- common/autotest_common.sh@1358 -- # fc=4 00:21:15.551 04:17:17 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="90c4989e-fcc3-45c4-b13e-7289a19d1148") .cluster_size' 00:21:15.551 04:17:17 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:15.551 04:17:17 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:15.551 04:17:17 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:15.551 4096 00:21:15.551 04:17:17 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:15.810 68dab097-b270-44e7-9c57-86291d146e5c 00:21:15.810 04:17:17 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:16.068 04:17:17 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:16.327 04:17:18 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:16.586 04:17:18 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:16.586 04:17:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:16.586 04:17:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:16.586 04:17:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:16.586 04:17:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:16.586 04:17:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:16.586 04:17:18 -- common/autotest_common.sh@1330 -- # shift 00:21:16.586 04:17:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:16.586 04:17:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:16.586 04:17:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:16.586 04:17:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:16.586 04:17:18 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:16.586 04:17:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:16.586 04:17:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:16.586 04:17:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:16.586 04:17:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:16.845 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:16.845 fio-3.35 00:21:16.845 Starting 1 thread 00:21:19.379 00:21:19.379 test: (groupid=0, jobs=1): err= 0: pid=95146: Tue Nov 26 04:17:20 2024 00:21:19.379 read: IOPS=6732, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec) 00:21:19.379 slat (nsec): min=1690, max=430291, avg=2781.99, stdev=5121.43 00:21:19.379 clat (usec): min=3943, max=18096, avg=10206.90, stdev=1027.46 00:21:19.379 lat (usec): min=3953, max=18099, avg=10209.68, stdev=1027.20 00:21:19.379 clat percentiles (usec): 00:21:19.379 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:19.379 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:21:19.379 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:21:19.379 | 99.00th=[12649], 99.50th=[13042], 99.90th=[14222], 99.95th=[14615], 00:21:19.379 | 99.99th=[16188] 00:21:19.379 bw ( KiB/s): min=26520, max=27354, per=99.83%, avg=26884.50, stdev=363.22, samples=4 00:21:19.379 iops : min= 6630, max= 6838, avg=6721.00, stdev=90.59, samples=4 00:21:19.379 write: IOPS=6737, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec); 0 zone resets 00:21:19.380 slat (nsec): min=1793, max=274906, avg=2956.08, stdev=3987.24 00:21:19.380 clat (usec): min=2612, max=15481, avg=8740.90, stdev=868.96 00:21:19.380 lat (usec): min=2626, max=15484, avg=8743.86, stdev=868.83 00:21:19.380 clat percentiles (usec): 00:21:19.380 | 1.00th=[ 6849], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8029], 00:21:19.380 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:21:19.380 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:21:19.380 | 99.00th=[10683], 99.50th=[10945], 99.90th=[14091], 99.95th=[14353], 00:21:19.380 | 99.99th=[15401] 00:21:19.380 bw ( KiB/s): min=26533, max=27656, per=99.82%, avg=26903.25, stdev=515.91, samples=4 00:21:19.380 iops : min= 6633, max= 6914, avg=6725.75, stdev=129.04, samples=4 00:21:19.380 lat (msec) : 4=0.04%, 10=68.21%, 20=31.75% 00:21:19.380 cpu : usr=66.57%, sys=25.31%, ctx=2, majf=0, minf=5 00:21:19.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:19.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:19.380 issued rwts: total=13519,13529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:19.380 00:21:19.380 Run status group 0 (all jobs): 00:21:19.380 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.4MB), run=2008-2008msec 00:21:19.380 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.4MB), run=2008-2008msec 00:21:19.380 04:17:20 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:19.380 04:17:20 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:19.638 04:17:21 -- host/fio.sh@64 -- # ls_nested_guid=4b6c1d8e-e41f-446d-98d8-d315af86151e 00:21:19.638 04:17:21 -- host/fio.sh@65 -- # get_lvs_free_mb 4b6c1d8e-e41f-446d-98d8-d315af86151e 00:21:19.638 04:17:21 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4b6c1d8e-e41f-446d-98d8-d315af86151e 00:21:19.639 04:17:21 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:19.639 04:17:21 -- common/autotest_common.sh@1355 -- # local fc 00:21:19.639 04:17:21 -- common/autotest_common.sh@1356 -- # local cs 00:21:19.639 04:17:21 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:19.897 04:17:21 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:19.897 { 00:21:19.897 "base_bdev": "Nvme0n1", 00:21:19.897 "block_size": 4096, 00:21:19.897 "cluster_size": 1073741824, 00:21:19.897 "free_clusters": 0, 00:21:19.897 "name": "lvs_0", 00:21:19.897 "total_data_clusters": 4, 00:21:19.897 "uuid": "90c4989e-fcc3-45c4-b13e-7289a19d1148" 00:21:19.897 }, 00:21:19.897 { 00:21:19.897 "base_bdev": "68dab097-b270-44e7-9c57-86291d146e5c", 00:21:19.897 "block_size": 4096, 00:21:19.897 "cluster_size": 4194304, 00:21:19.897 "free_clusters": 1022, 00:21:19.897 "name": "lvs_n_0", 00:21:19.897 "total_data_clusters": 1022, 00:21:19.897 "uuid": "4b6c1d8e-e41f-446d-98d8-d315af86151e" 00:21:19.897 } 00:21:19.897 ]' 00:21:19.897 04:17:21 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4b6c1d8e-e41f-446d-98d8-d315af86151e") .free_clusters' 00:21:19.897 04:17:21 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:19.897 04:17:21 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4b6c1d8e-e41f-446d-98d8-d315af86151e") .cluster_size' 00:21:20.156 4088 00:21:20.156 04:17:21 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:20.156 04:17:21 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:20.156 04:17:21 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:20.156 04:17:21 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:20.156 8aa6320a-f8bc-464e-a4d1-12501be18ec8 00:21:20.156 04:17:21 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:20.415 04:17:22 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:20.672 04:17:22 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:20.931 04:17:22 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:20.931 04:17:22 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:20.931 04:17:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:20.931 04:17:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.931 
04:17:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:20.931 04:17:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.931 04:17:22 -- common/autotest_common.sh@1330 -- # shift 00:21:20.931 04:17:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:20.931 04:17:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:20.931 04:17:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:20.931 04:17:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:20.931 04:17:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:20.931 04:17:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:20.931 04:17:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:20.931 04:17:22 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:20.931 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:20.931 fio-3.35 00:21:20.931 Starting 1 thread 00:21:23.468 00:21:23.468 test: (groupid=0, jobs=1): err= 0: pid=95262: Tue Nov 26 04:17:24 2024 00:21:23.468 read: IOPS=5720, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2009msec) 00:21:23.468 slat (nsec): min=1761, max=347632, avg=3005.21, stdev=5282.28 00:21:23.468 clat (usec): min=4724, max=21911, avg=11914.52, stdev=1149.50 00:21:23.468 lat (usec): min=4734, max=21913, avg=11917.53, stdev=1149.26 00:21:23.468 clat percentiles (usec): 00:21:23.468 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[11076], 00:21:23.468 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:21:23.468 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:21:23.468 | 99.00th=[14615], 99.50th=[15008], 99.90th=[19530], 99.95th=[19530], 00:21:23.468 | 99.99th=[21890] 00:21:23.468 bw ( KiB/s): min=21624, max=23616, per=99.95%, avg=22870.00, stdev=863.34, samples=4 00:21:23.468 iops : min= 5406, max= 5904, avg=5717.50, stdev=215.84, samples=4 00:21:23.468 write: IOPS=5709, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec); 0 zone resets 00:21:23.468 slat (nsec): min=1823, max=277942, avg=3062.24, stdev=4015.74 00:21:23.468 clat (usec): min=2630, max=19902, avg=10403.60, stdev=1013.02 00:21:23.468 lat (usec): min=2643, max=19904, avg=10406.66, stdev=1012.95 00:21:23.468 clat percentiles (usec): 00:21:23.468 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:21:23.468 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:21:23.468 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:21:23.468 | 99.00th=[12518], 99.50th=[12911], 99.90th=[18482], 99.95th=[19530], 00:21:23.468 | 99.99th=[19792] 
00:21:23.468 bw ( KiB/s): min=22656, max=23048, per=99.84%, avg=22802.00, stdev=184.68, samples=4 00:21:23.468 iops : min= 5664, max= 5762, avg=5700.50, stdev=46.17, samples=4 00:21:23.468 lat (msec) : 4=0.03%, 10=18.35%, 20=81.60%, 50=0.02% 00:21:23.468 cpu : usr=73.90%, sys=19.42%, ctx=7, majf=0, minf=5 00:21:23.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:23.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:23.468 issued rwts: total=11492,11471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:23.468 00:21:23.468 Run status group 0 (all jobs): 00:21:23.468 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2009-2009msec 00:21:23.468 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:21:23.468 04:17:24 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:23.728 04:17:25 -- host/fio.sh@74 -- # sync 00:21:23.728 04:17:25 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:23.987 04:17:25 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:24.246 04:17:25 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:24.504 04:17:26 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:24.764 04:17:26 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:25.701 04:17:27 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:25.701 04:17:27 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:25.701 04:17:27 -- host/fio.sh@86 -- # nvmftestfini 00:21:25.701 04:17:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:25.701 04:17:27 -- nvmf/common.sh@116 -- # sync 00:21:25.701 04:17:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:25.701 04:17:27 -- nvmf/common.sh@119 -- # set +e 00:21:25.701 04:17:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:25.701 04:17:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:25.701 rmmod nvme_tcp 00:21:25.701 rmmod nvme_fabrics 00:21:25.701 rmmod nvme_keyring 00:21:25.701 04:17:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:25.701 04:17:27 -- nvmf/common.sh@123 -- # set -e 00:21:25.701 04:17:27 -- nvmf/common.sh@124 -- # return 0 00:21:25.701 04:17:27 -- nvmf/common.sh@477 -- # '[' -n 94814 ']' 00:21:25.701 04:17:27 -- nvmf/common.sh@478 -- # killprocess 94814 00:21:25.701 04:17:27 -- common/autotest_common.sh@936 -- # '[' -z 94814 ']' 00:21:25.701 04:17:27 -- common/autotest_common.sh@940 -- # kill -0 94814 00:21:25.701 04:17:27 -- common/autotest_common.sh@941 -- # uname 00:21:25.701 04:17:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.701 04:17:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94814 00:21:25.701 killing process with pid 94814 00:21:25.701 04:17:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:25.701 04:17:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:25.701 04:17:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94814' 00:21:25.701 
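The fio job above runs against the NVMe-oF target through SPDK's external ioengine rather than a kernel block device: the harness checks ldd output for a sanitizer runtime to preload, then launches stock fio with the spdk_nvme plugin in LD_PRELOAD and a --filename string that encodes the TCP transport address and namespace (ioengine=spdk comes from the job file). Condensed to a single invocation, using only the paths and arguments visible in this log:

  # fio over NVMe-oF/TCP via the SPDK plugin, as exercised by host/fio.sh@70 above
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096

After the run, cleanup proceeded in reverse order of creation, as host/fio.sh@74-80 did above: delete the nested lvol and lvstore (lvs_n_0) before the base lvol and lvstore (lvs_0), then detach the Nvme0 controller.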
04:17:27 -- common/autotest_common.sh@955 -- # kill 94814 00:21:25.701 04:17:27 -- common/autotest_common.sh@960 -- # wait 94814 00:21:25.960 04:17:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:25.960 04:17:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:25.960 04:17:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:25.960 04:17:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.960 04:17:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:25.960 04:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.960 04:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.960 04:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.960 04:17:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:25.960 00:21:25.960 real 0m19.518s 00:21:25.960 user 1m24.600s 00:21:25.960 sys 0m4.481s 00:21:25.960 04:17:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:25.960 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:21:25.960 ************************************ 00:21:25.960 END TEST nvmf_fio_host 00:21:25.960 ************************************ 00:21:25.960 04:17:27 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:25.960 04:17:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.960 04:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.960 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:21:25.960 ************************************ 00:21:25.960 START TEST nvmf_failover 00:21:25.960 ************************************ 00:21:25.960 04:17:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:26.219 * Looking for test storage... 00:21:26.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:26.219 04:17:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:26.219 04:17:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:26.219 04:17:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:26.219 04:17:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:26.219 04:17:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:26.219 04:17:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:26.219 04:17:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:26.219 04:17:27 -- scripts/common.sh@335 -- # IFS=.-: 00:21:26.219 04:17:27 -- scripts/common.sh@335 -- # read -ra ver1 00:21:26.219 04:17:27 -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.219 04:17:27 -- scripts/common.sh@336 -- # read -ra ver2 00:21:26.219 04:17:27 -- scripts/common.sh@337 -- # local 'op=<' 00:21:26.219 04:17:27 -- scripts/common.sh@339 -- # ver1_l=2 00:21:26.219 04:17:27 -- scripts/common.sh@340 -- # ver2_l=1 00:21:26.219 04:17:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:26.219 04:17:27 -- scripts/common.sh@343 -- # case "$op" in 00:21:26.219 04:17:27 -- scripts/common.sh@344 -- # : 1 00:21:26.219 04:17:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:26.219 04:17:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.219 04:17:27 -- scripts/common.sh@364 -- # decimal 1 00:21:26.219 04:17:27 -- scripts/common.sh@352 -- # local d=1 00:21:26.219 04:17:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.219 04:17:27 -- scripts/common.sh@354 -- # echo 1 00:21:26.219 04:17:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:26.219 04:17:27 -- scripts/common.sh@365 -- # decimal 2 00:21:26.219 04:17:27 -- scripts/common.sh@352 -- # local d=2 00:21:26.219 04:17:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.219 04:17:27 -- scripts/common.sh@354 -- # echo 2 00:21:26.219 04:17:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:26.220 04:17:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:26.220 04:17:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:26.220 04:17:27 -- scripts/common.sh@367 -- # return 0 00:21:26.220 04:17:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.220 04:17:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:26.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.220 --rc genhtml_branch_coverage=1 00:21:26.220 --rc genhtml_function_coverage=1 00:21:26.220 --rc genhtml_legend=1 00:21:26.220 --rc geninfo_all_blocks=1 00:21:26.220 --rc geninfo_unexecuted_blocks=1 00:21:26.220 00:21:26.220 ' 00:21:26.220 04:17:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:26.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.220 --rc genhtml_branch_coverage=1 00:21:26.220 --rc genhtml_function_coverage=1 00:21:26.220 --rc genhtml_legend=1 00:21:26.220 --rc geninfo_all_blocks=1 00:21:26.220 --rc geninfo_unexecuted_blocks=1 00:21:26.220 00:21:26.220 ' 00:21:26.220 04:17:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:26.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.220 --rc genhtml_branch_coverage=1 00:21:26.220 --rc genhtml_function_coverage=1 00:21:26.220 --rc genhtml_legend=1 00:21:26.220 --rc geninfo_all_blocks=1 00:21:26.220 --rc geninfo_unexecuted_blocks=1 00:21:26.220 00:21:26.220 ' 00:21:26.220 04:17:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:26.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.220 --rc genhtml_branch_coverage=1 00:21:26.220 --rc genhtml_function_coverage=1 00:21:26.220 --rc genhtml_legend=1 00:21:26.220 --rc geninfo_all_blocks=1 00:21:26.220 --rc geninfo_unexecuted_blocks=1 00:21:26.220 00:21:26.220 ' 00:21:26.220 04:17:27 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:26.220 04:17:27 -- nvmf/common.sh@7 -- # uname -s 00:21:26.220 04:17:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.220 04:17:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.220 04:17:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.220 04:17:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.220 04:17:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.220 04:17:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.220 04:17:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.220 04:17:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.220 04:17:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.220 04:17:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.220 04:17:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:21:26.220 
04:17:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:21:26.220 04:17:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.220 04:17:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.220 04:17:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:26.220 04:17:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.220 04:17:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.220 04:17:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.220 04:17:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.220 04:17:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.220 04:17:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.220 04:17:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.220 04:17:27 -- paths/export.sh@5 -- # export PATH 00:21:26.220 04:17:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.220 04:17:27 -- nvmf/common.sh@46 -- # : 0 00:21:26.220 04:17:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:26.220 04:17:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:26.220 04:17:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:26.220 04:17:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.220 04:17:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.220 04:17:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
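nvmf/common.sh above also fixes the host identity that initiator-side tooling presents when connecting: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the UUID portion of that NQN, and both are packaged into the NVME_HOST flag array. A small illustrative sketch only; nvme connect is not invoked in this excerpt, and the connect line below merely shows how these standard nvme-cli flags would be passed, with the subsystem details taken from later in this log:

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:06ec455a-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID, matching the value recorded above
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # hypothetical use against the failover subsystem created further below in this log
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"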
00:21:26.220 04:17:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:26.220 04:17:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:26.220 04:17:27 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.220 04:17:27 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.220 04:17:27 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.220 04:17:27 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.220 04:17:27 -- host/failover.sh@18 -- # nvmftestinit 00:21:26.220 04:17:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:26.220 04:17:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.220 04:17:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:26.220 04:17:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:26.220 04:17:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:26.220 04:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.220 04:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.220 04:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.220 04:17:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:26.220 04:17:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:26.220 04:17:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:26.220 04:17:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:26.220 04:17:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:26.220 04:17:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:26.220 04:17:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.220 04:17:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.220 04:17:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:26.220 04:17:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:26.220 04:17:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:26.220 04:17:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:26.220 04:17:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:26.220 04:17:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.220 04:17:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:26.220 04:17:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:26.220 04:17:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:26.220 04:17:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:26.220 04:17:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:26.220 04:17:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:26.220 Cannot find device "nvmf_tgt_br" 00:21:26.220 04:17:27 -- nvmf/common.sh@154 -- # true 00:21:26.220 04:17:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:26.220 Cannot find device "nvmf_tgt_br2" 00:21:26.220 04:17:27 -- nvmf/common.sh@155 -- # true 00:21:26.220 04:17:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:26.220 04:17:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:26.220 Cannot find device "nvmf_tgt_br" 00:21:26.220 04:17:27 -- nvmf/common.sh@157 -- # true 00:21:26.220 04:17:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:26.220 Cannot find device "nvmf_tgt_br2" 00:21:26.220 04:17:27 -- nvmf/common.sh@158 -- # true 00:21:26.220 04:17:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:26.479 04:17:28 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:26.479 04:17:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:26.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.479 04:17:28 -- nvmf/common.sh@161 -- # true 00:21:26.479 04:17:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.479 04:17:28 -- nvmf/common.sh@162 -- # true 00:21:26.479 04:17:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:26.479 04:17:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.479 04:17:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.479 04:17:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.479 04:17:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.479 04:17:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.479 04:17:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:26.479 04:17:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:26.479 04:17:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:26.479 04:17:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:26.479 04:17:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:26.479 04:17:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:26.479 04:17:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:26.479 04:17:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.479 04:17:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:26.479 04:17:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.479 04:17:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:26.479 04:17:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:26.479 04:17:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:26.479 04:17:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.479 04:17:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.479 04:17:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.479 04:17:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.479 04:17:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:26.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:26.479 00:21:26.479 --- 10.0.0.2 ping statistics --- 00:21:26.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.479 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:26.479 04:17:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:26.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:26.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:26.479 00:21:26.479 --- 10.0.0.3 ping statistics --- 00:21:26.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.479 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:26.479 04:17:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:26.480 00:21:26.480 --- 10.0.0.1 ping statistics --- 00:21:26.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.480 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:26.480 04:17:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.480 04:17:28 -- nvmf/common.sh@421 -- # return 0 00:21:26.480 04:17:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:26.480 04:17:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.480 04:17:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:26.480 04:17:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:26.480 04:17:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.480 04:17:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:26.480 04:17:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:26.480 04:17:28 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:26.480 04:17:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:26.480 04:17:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.480 04:17:28 -- common/autotest_common.sh@10 -- # set +x 00:21:26.480 04:17:28 -- nvmf/common.sh@469 -- # nvmfpid=95547 00:21:26.480 04:17:28 -- nvmf/common.sh@470 -- # waitforlisten 95547 00:21:26.480 04:17:28 -- common/autotest_common.sh@829 -- # '[' -z 95547 ']' 00:21:26.480 04:17:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:26.480 04:17:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.480 04:17:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.480 04:17:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.480 04:17:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.738 04:17:28 -- common/autotest_common.sh@10 -- # set +x 00:21:26.738 [2024-11-26 04:17:28.298602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:26.738 [2024-11-26 04:17:28.298696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.738 [2024-11-26 04:17:28.441228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:26.998 [2024-11-26 04:17:28.513380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:26.998 [2024-11-26 04:17:28.513564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.998 [2024-11-26 04:17:28.513582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
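nvmf_veth_init above assembles a self-contained test network before the target starts: an nvmf_tgt_ns_spdk namespace holds the target-side veth ends (10.0.0.2, plus a second interface at 10.0.0.3), the initiator end (10.0.0.1) stays in the root namespace, an nvmf_br bridge joins the peer ends, iptables admits TCP/4420, and nvmf_tgt is then launched inside the namespace. An abridged sketch of the same topology, using only commands that appear in this log (the 10.0.0.3 interface is configured the same way and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, 10.0.0.2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target runs inside the namespace; the pings above verify 10.0.0.1 <-> 10.0.0.2/3 reachability
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE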
00:21:26.998 [2024-11-26 04:17:28.513594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.998 [2024-11-26 04:17:28.513750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.998 [2024-11-26 04:17:28.514343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.998 [2024-11-26 04:17:28.514403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.572 04:17:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.572 04:17:29 -- common/autotest_common.sh@862 -- # return 0 00:21:27.572 04:17:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:27.572 04:17:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.572 04:17:29 -- common/autotest_common.sh@10 -- # set +x 00:21:27.833 04:17:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.833 04:17:29 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:28.091 [2024-11-26 04:17:29.644282] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.091 04:17:29 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:28.350 Malloc0 00:21:28.350 04:17:29 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.609 04:17:30 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.609 04:17:30 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.867 [2024-11-26 04:17:30.518618] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.867 04:17:30 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:29.126 [2024-11-26 04:17:30.722771] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.127 04:17:30 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:29.386 [2024-11-26 04:17:30.923047] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:29.386 04:17:30 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:29.386 04:17:30 -- host/failover.sh@31 -- # bdevperf_pid=95659 00:21:29.386 04:17:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.386 04:17:30 -- host/failover.sh@34 -- # waitforlisten 95659 /var/tmp/bdevperf.sock 00:21:29.386 04:17:30 -- common/autotest_common.sh@829 -- # '[' -z 95659 ']' 00:21:29.386 04:17:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.386 04:17:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
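From here the failover test provisions the target purely over rpc.py and then drives I/O from bdevperf while listeners are removed underneath it. The block below condenses, in order, the target-side setup already shown above (transport, Malloc0, cnode1, three listeners on 10.0.0.2) and the host-side sequence that follows in this log: attach NVMe0 through bdevperf's RPC socket on ports 4420 and 4421, start 15 s of verify I/O with perform_tests, then remove and re-add listeners so the I/O has to fail over between paths. This is a sketch of what the log records, not a substitute for failover.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1

  # target side (shown above)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s "$port"
  done
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &   # -z: wait for the perform_tests RPC

  # host side (the sequence that follows below)
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1; $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # force failover off 4420
  sleep 3; $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 3; $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 1; $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  wait   # the perform_tests run returns 0 once the 15 s of verify I/O completes

The long runs of "The recv state of tqpair=... is same with the state(5) to be set" messages below appear as the target tears down the queue pairs behind each removed listener; in this log they coincide with each nvmf_subsystem_remove_listener call while the verify I/O keeps running.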
00:21:29.386 04:17:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.386 04:17:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.386 04:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:30.348 04:17:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.348 04:17:31 -- common/autotest_common.sh@862 -- # return 0 00:21:30.348 04:17:31 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.607 NVMe0n1 00:21:30.607 04:17:32 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.866 00:21:30.866 04:17:32 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.866 04:17:32 -- host/failover.sh@39 -- # run_test_pid=95707 00:21:30.866 04:17:32 -- host/failover.sh@41 -- # sleep 1 00:21:32.244 04:17:33 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.244 [2024-11-26 04:17:33.828632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828875] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.828995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 
00:21:32.244 [2024-11-26 04:17:33.829124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is 
same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.244 [2024-11-26 04:17:33.829354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.245 [2024-11-26 04:17:33.829363] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.245 [2024-11-26 04:17:33.829371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.245 [2024-11-26 04:17:33.829380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662c90 is same with the state(5) to be set 00:21:32.245 04:17:33 -- host/failover.sh@45 -- # sleep 3 00:21:35.531 04:17:36 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.531 00:21:35.531 04:17:37 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:35.790 [2024-11-26 04:17:37.377224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377323] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 [2024-11-26 04:17:37.377508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664380 is same with the state(5) to be set 00:21:35.790 04:17:37 -- host/failover.sh@50 -- # sleep 3 00:21:39.074 04:17:40 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.074 [2024-11-26 04:17:40.659103] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.074 04:17:40 -- host/failover.sh@55 -- # sleep 1 00:21:40.008 04:17:41 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:40.268 [2024-11-26 04:17:41.941789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.941989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942052] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942249] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 [2024-11-26 04:17:41.942267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x664a60 is same with the state(5) to be set 00:21:40.268 04:17:41 -- host/failover.sh@59 -- # wait 95707 00:21:46.846 0 00:21:46.846 04:17:47 -- host/failover.sh@61 -- # killprocess 95659 00:21:46.846 04:17:47 -- 
common/autotest_common.sh@936 -- # '[' -z 95659 ']' 00:21:46.846 04:17:47 -- common/autotest_common.sh@940 -- # kill -0 95659 00:21:46.846 04:17:47 -- common/autotest_common.sh@941 -- # uname 00:21:46.846 04:17:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.846 04:17:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95659 00:21:46.846 killing process with pid 95659 00:21:46.846 04:17:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:46.846 04:17:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:46.846 04:17:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95659' 00:21:46.846 04:17:47 -- common/autotest_common.sh@955 -- # kill 95659 00:21:46.846 04:17:47 -- common/autotest_common.sh@960 -- # wait 95659 00:21:46.846 04:17:47 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.846 [2024-11-26 04:17:30.978197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:46.846 [2024-11-26 04:17:30.978291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95659 ] 00:21:46.846 [2024-11-26 04:17:31.112481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.846 [2024-11-26 04:17:31.197795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.846 Running I/O for 15 seconds... 00:21:46.846 [2024-11-26 04:17:33.829648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.846 [2024-11-26 04:17:33.829736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.846 [2024-11-26 04:17:33.829763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.846 [2024-11-26 04:17:33.829778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.846 [2024-11-26 04:17:33.829793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.846 [2024-11-26 04:17:33.829807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.846 [2024-11-26 04:17:33.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.847 [2024-11-26 04:17:33.829833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.847 [2024-11-26 04:17:33.829847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.847 [2024-11-26 04:17:33.829859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.847 [2024-11-26 04:17:33.829873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.847 [2024-11-26 
04:17:33.829885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.847-00:21:46.850 [output condensed: from 2024-11-26 04:17:33.829898 through 04:17:33.833273 the same two *NOTICE* messages repeat for every queued command on qid:1: nvme_io_qpair_print_command prints the READ or WRITE (len:8, cid and lba varying) and spdk_nvme_print_completion reports it ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0, as the I/O qpair is torn down for the controller reset]
00:21:46.850 [2024-11-26 04:17:33.833286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x1cb3130 is same with the state(5) to be set 00:21:46.850 [2024-11-26 04:17:33.833300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.850 [2024-11-26 04:17:33.833309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.850 [2024-11-26 04:17:33.833323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1592 len:8 PRP1 0x0 PRP2 0x0 00:21:46.850 [2024-11-26 04:17:33.833334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.850 [2024-11-26 04:17:33.833393] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb3130 was disconnected and freed. reset controller. 00:21:46.850 [2024-11-26 04:17:33.833408] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:46.850 [2024-11-26 04:17:33.833458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.850 [2024-11-26 04:17:33.833477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.850 [2024-11-26 04:17:33.833490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.850 [2024-11-26 04:17:33.833501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.850 [2024-11-26 04:17:33.833513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.850 [2024-11-26 04:17:33.833524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.850 [2024-11-26 04:17:33.833536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.850 [2024-11-26 04:17:33.833548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.850 [2024-11-26 04:17:33.833561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.850 [2024-11-26 04:17:33.835666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.850 [2024-11-26 04:17:33.835699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ecb0 (9): Bad file descriptor 00:21:46.850 [2024-11-26 04:17:33.852453] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
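The "(00/08)" pair that spdk_nvme_print_completion attaches to every aborted command above is the NVMe status code type and status code. Below is a minimal, editor-added C sketch (not part of the captured test output; it assumes only the NVMe base-specification meaning of those fields) showing how that pair decodes:

    #include <stdint.h>
    #include <stdio.h>

    /* Name a status from the generic command status set (SCT 0x0); only the
     * values relevant to the log above are spelled out. */
    static const char *nvme_generic_status_name(uint8_t sc)
    {
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other generic status";
        }
    }

    /* spdk_nvme_print_completion prints the pair as "(SCT/SC)", e.g. "(00/08)". */
    static void decode(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0) {
            printf("(%02x/%02x) -> %s\n", sct, sc, nvme_generic_status_name(sc));
        } else {
            printf("(%02x/%02x) -> non-generic status code type 0x%x\n", sct, sc, sct);
        }
    }

    int main(void)
    {
        /* the status reported for every command aborted during the qpair teardown */
        decode(0x00, 0x08);
        return 0;
    }

The trailing "p:0 m:0 dnr:0" flags on the same lines are the completion's phase tag, more, and do-not-retry bits; dnr:0 means the aborted I/O may legitimately be retried, which is consistent with the bdev layer re-issuing it after the failover to 10.0.0.2:4421.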
00:21:46.850-00:21:46.852 [output condensed: a second teardown from 2024-11-26 04:17:37.377495 through 04:17:37.379597 repeats the pattern: the four outstanding ASYNC EVENT REQUEST admin commands (qid:0, cid:0-3) are completed with ABORTED - SQ DELETION (00/08), nvme_tcp_qpair_set_recv_state logs *ERROR*: The recv state of tqpair=0x1c2ecb0 is same with the state(5) to be set, and every queued READ/WRITE on qid:1 (len:8, cid and lba varying) is again printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) by spdk_nvme_print_completion]
00:21:46.852 [2024-11-26 04:17:37.379609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55920
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.379621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.379759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.379850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.852 [2024-11-26 04:17:37.379906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.379978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.379991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380155] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.380209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.380284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.852 [2024-11-26 04:17:37.380336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.852 [2024-11-26 04:17:37.380348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.852 [2024-11-26 04:17:37.380360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380409] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.853 [2024-11-26 04:17:37.380843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.853 [2024-11-26 04:17:37.380952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.380977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.380988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.853 [2024-11-26 04:17:37.381208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.853 [2024-11-26 04:17:37.381220] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8db10 is same with the state(5) to be set
00:21:46.853 [2024-11-26 04:17:37.381234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:46.853 [2024-11-26 04:17:37.381243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:46.853 [2024-11-26 04:17:37.381252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55744 len:8 PRP1 0x0 PRP2 0x0
00:21:46.853 [2024-11-26 04:17:37.381263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.853 [2024-11-26 04:17:37.381295] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c8db10 was disconnected and freed. reset controller.
00:21:46.853 [2024-11-26 04:17:37.381309] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:46.853 [2024-11-26 04:17:37.381320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.853 [2024-11-26 04:17:37.383518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.853 [2024-11-26 04:17:37.383549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ecb0 (9): Bad file descriptor
00:21:46.853 [2024-11-26 04:17:37.404699] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:46.853 [2024-11-26 04:17:41.942383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.853 [2024-11-26 04:17:41.942479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.853 [2024-11-26 04:17:41.942505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.853 [2024-11-26 04:17:41.942520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.853 [2024-11-26 04:17:41.942535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.853 [2024-11-26 04:17:41.942549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.854 [2024-11-26 04:17:41.942564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.854 [2024-11-26 04:17:41.942577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.854 [2024-11-26 04:17:41.942591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.854 [2024-11-26 04:17:41.942630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.854 [2024-11-26 04:17:41.942645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.854 [2024-11-26
04:17:41.942658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.942951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.942966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.854 [2024-11-26 04:17:41.942981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.854 [2024-11-26 04:17:41.943018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.854 [2024-11-26 04:17:41.943515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.854 [2024-11-26 04:17:41.943614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.854 [2024-11-26 04:17:41.943663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.854 [2024-11-26 04:17:41.943676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.943694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.943720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.943762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.943978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.943990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 
[2024-11-26 04:17:41.944062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944330] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.855 [2024-11-26 04:17:41.944578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.855 [2024-11-26 04:17:41.944591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.855 [2024-11-26 04:17:41.944603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.944910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.944935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.944985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.944999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109584 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 [2024-11-26 04:17:41.945604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.856 [2024-11-26 04:17:41.945631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.856 [2024-11-26 04:17:41.945644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.856 
[2024-11-26 04:17:41.945657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.857 [2024-11-26 04:17:41.945681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.857 [2024-11-26 04:17:41.945760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.857 [2024-11-26 04:17:41.945934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.945946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb5210 is same with the state(5) to be set 00:21:46.857 [2024-11-26 04:17:41.945960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.857 [2024-11-26 04:17:41.945968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.857 [2024-11-26 04:17:41.945977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109168 len:8 PRP1 0x0 PRP2 0x0 00:21:46.857 [2024-11-26 04:17:41.945988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.946080] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb5210 was disconnected and freed. reset controller. 00:21:46.857 [2024-11-26 04:17:41.946106] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:46.857 [2024-11-26 04:17:41.946163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.857 [2024-11-26 04:17:41.946192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.946209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.857 [2024-11-26 04:17:41.946222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.946235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.857 [2024-11-26 04:17:41.946246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.946259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.857 [2024-11-26 04:17:41.946270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.857 [2024-11-26 04:17:41.946282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.857 [2024-11-26 04:17:41.946331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ecb0 (9): Bad file descriptor 00:21:46.857 [2024-11-26 04:17:41.948178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.857 [2024-11-26 04:17:41.963477] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
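
The wall of "ABORTED - SQ DELETION" notices above is the expected shape of a path failure in this test: every verify I/O still queued on the torn-down path completes with that status once its submission queue is deleted, the disconnected qpair is freed, and bdev_nvme fails over from 10.0.0.2:4422 to the next registered path (10.0.0.2:4420) before resetting the controller. The alternate paths come from attaching the same controller name to several target listeners, as the failover.sh trace further down shows. A minimal sketch of that setup, assuming the repo paths and addresses used in this log (not the literal script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # extra listeners on the target give the host somewhere to fail over to
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attaching the same bdev name (-b NVMe0) against each port is what registers the
    # alternate trids reported in the "Start failover ..." notices
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # detaching the currently active path triggers the abort/failover/reset sequence above
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
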
00:21:46.857
00:21:46.857 Latency(us)
00:21:46.857 [2024-11-26T04:17:48.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.857 [2024-11-26T04:17:48.625Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:46.857 Verification LBA range: start 0x0 length 0x4000
00:21:46.857 NVMe0n1 : 15.01 15218.51 59.45 225.83 0.00 8273.12 525.03 16562.73
00:21:46.857 [2024-11-26T04:17:48.625Z] ===================================================================================================================
00:21:46.857 [2024-11-26T04:17:48.625Z] Total : 15218.51 59.45 225.83 0.00 8273.12 525.03 16562.73
00:21:46.857 Received shutdown signal, test time was about 15.000000 seconds
00:21:46.857
00:21:46.857 Latency(us)
00:21:46.857 [2024-11-26T04:17:48.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.857 [2024-11-26T04:17:48.625Z] ===================================================================================================================
00:21:46.857 [2024-11-26T04:17:48.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:46.857 04:17:47 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:46.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:46.857 04:17:47 -- host/failover.sh@65 -- # count=3
00:21:46.857 04:17:47 -- host/failover.sh@67 -- # (( count != 3 ))
00:21:46.857 04:17:47 -- host/failover.sh@73 -- # bdevperf_pid=95912
00:21:46.857 04:17:47 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:46.857 04:17:47 -- host/failover.sh@75 -- # waitforlisten 95912 /var/tmp/bdevperf.sock
00:21:46.857 04:17:47 -- common/autotest_common.sh@829 -- # '[' -z 95912 ']'
00:21:46.857 04:17:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:46.857 04:17:47 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:46.857 04:17:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:46.857 04:17:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.857 04:17:47 -- common/autotest_common.sh@10 -- # set +x 00:21:47.426 04:17:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.426 04:17:48 -- common/autotest_common.sh@862 -- # return 0 00:21:47.426 04:17:48 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:47.683 [2024-11-26 04:17:49.265050] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:47.683 04:17:49 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:47.942 [2024-11-26 04:17:49.477290] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:47.942 04:17:49 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:48.201 NVMe0n1 00:21:48.201 04:17:49 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:48.460 00:21:48.460 04:17:50 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:48.719 00:21:48.719 04:17:50 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:48.719 04:17:50 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:48.978 04:17:50 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:49.236 04:17:50 -- host/failover.sh@87 -- # sleep 3 00:21:52.525 04:17:53 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:52.525 04:17:53 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:52.525 04:17:53 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.525 04:17:53 -- host/failover.sh@90 -- # run_test_pid=96051 00:21:52.525 04:17:53 -- host/failover.sh@92 -- # wait 96051 00:21:53.463 0 00:21:53.463 04:17:55 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:53.463 [2024-11-26 04:17:48.038497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:53.463 [2024-11-26 04:17:48.038601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95912 ] 00:21:53.463 [2024-11-26 04:17:48.166866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.463 [2024-11-26 04:17:48.235583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.463 [2024-11-26 04:17:50.727579] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:53.463 [2024-11-26 04:17:50.727692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.463 [2024-11-26 04:17:50.727737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.463 [2024-11-26 04:17:50.727754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.463 [2024-11-26 04:17:50.727767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.463 [2024-11-26 04:17:50.727779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.463 [2024-11-26 04:17:50.727791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.463 [2024-11-26 04:17:50.727803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.463 [2024-11-26 04:17:50.727814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.463 [2024-11-26 04:17:50.727826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:53.463 [2024-11-26 04:17:50.727867] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:53.463 [2024-11-26 04:17:50.727894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f0cb0 (9): Bad file descriptor 00:21:53.463 [2024-11-26 04:17:50.734962] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:53.463 Running I/O for 1 seconds... 
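
The one-second verify pass above is driven entirely over bdevperf's RPC socket rather than its command line: the daemon is started idle with -z and -r /var/tmp/bdevperf.sock, the NVMe controllers are attached through that socket, and bdevperf.py perform_tests launches the workload, which is what lets the script pull paths out from under it mid-run. A rough outline of that pattern, using the paths shown in this trace (a sketch, not the literal failover.sh):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # -z keeps bdevperf waiting for RPCs instead of running from a config
    $bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" "$sock"   # helper sourced from autotest_common.sh

    # attach the first path, then start the workload asynchronously
    $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
    run_test_pid=$!
    # ...detach controllers or add/remove listeners here to force failover mid-I/O...
    wait "$run_test_pid"
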
00:21:53.463 00:21:53.463 Latency(us) 00:21:53.463 [2024-11-26T04:17:55.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.463 [2024-11-26T04:17:55.231Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:53.463 Verification LBA range: start 0x0 length 0x4000 00:21:53.463 NVMe0n1 : 1.01 14866.56 58.07 0.00 0.00 8570.30 1124.54 13226.36 00:21:53.463 [2024-11-26T04:17:55.231Z] =================================================================================================================== 00:21:53.463 [2024-11-26T04:17:55.231Z] Total : 14866.56 58.07 0.00 0.00 8570.30 1124.54 13226.36 00:21:53.463 04:17:55 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.463 04:17:55 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:53.722 04:17:55 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.981 04:17:55 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.981 04:17:55 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:54.240 04:17:55 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:54.240 04:17:55 -- host/failover.sh@101 -- # sleep 3 00:21:57.528 04:17:58 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:57.528 04:17:58 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:57.528 04:17:59 -- host/failover.sh@108 -- # killprocess 95912 00:21:57.528 04:17:59 -- common/autotest_common.sh@936 -- # '[' -z 95912 ']' 00:21:57.528 04:17:59 -- common/autotest_common.sh@940 -- # kill -0 95912 00:21:57.528 04:17:59 -- common/autotest_common.sh@941 -- # uname 00:21:57.528 04:17:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.528 04:17:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95912 00:21:57.528 killing process with pid 95912 00:21:57.528 04:17:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:57.528 04:17:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:57.528 04:17:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95912' 00:21:57.528 04:17:59 -- common/autotest_common.sh@955 -- # kill 95912 00:21:57.528 04:17:59 -- common/autotest_common.sh@960 -- # wait 95912 00:21:57.786 04:17:59 -- host/failover.sh@110 -- # sync 00:21:58.045 04:17:59 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.045 04:17:59 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:58.045 04:17:59 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:58.045 04:17:59 -- host/failover.sh@116 -- # nvmftestfini 00:21:58.045 04:17:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:58.045 04:17:59 -- nvmf/common.sh@116 -- # sync 00:21:58.045 04:17:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:58.045 04:17:59 -- nvmf/common.sh@119 -- # set +e 00:21:58.045 04:17:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:58.045 04:17:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:58.045 rmmod nvme_tcp 
00:21:58.304 rmmod nvme_fabrics 00:21:58.304 rmmod nvme_keyring 00:21:58.304 04:17:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:58.304 04:17:59 -- nvmf/common.sh@123 -- # set -e 00:21:58.304 04:17:59 -- nvmf/common.sh@124 -- # return 0 00:21:58.304 04:17:59 -- nvmf/common.sh@477 -- # '[' -n 95547 ']' 00:21:58.304 04:17:59 -- nvmf/common.sh@478 -- # killprocess 95547 00:21:58.304 04:17:59 -- common/autotest_common.sh@936 -- # '[' -z 95547 ']' 00:21:58.304 04:17:59 -- common/autotest_common.sh@940 -- # kill -0 95547 00:21:58.304 04:17:59 -- common/autotest_common.sh@941 -- # uname 00:21:58.304 04:17:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.304 04:17:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95547 00:21:58.304 killing process with pid 95547 00:21:58.304 04:17:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:58.304 04:17:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:58.304 04:17:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95547' 00:21:58.304 04:17:59 -- common/autotest_common.sh@955 -- # kill 95547 00:21:58.304 04:17:59 -- common/autotest_common.sh@960 -- # wait 95547 00:21:58.563 04:18:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:58.563 04:18:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:58.563 04:18:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:58.563 04:18:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.563 04:18:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:58.563 04:18:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.563 04:18:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.563 04:18:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.563 04:18:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:58.563 00:21:58.563 real 0m32.454s 00:21:58.563 user 2m5.197s 00:21:58.563 sys 0m4.928s 00:21:58.563 04:18:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:58.563 04:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:58.563 ************************************ 00:21:58.563 END TEST nvmf_failover 00:21:58.563 ************************************ 00:21:58.563 04:18:00 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:58.563 04:18:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:58.563 04:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.563 04:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:58.563 ************************************ 00:21:58.563 START TEST nvmf_discovery 00:21:58.563 ************************************ 00:21:58.563 04:18:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:58.563 * Looking for test storage... 
00:21:58.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:58.563 04:18:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:58.563 04:18:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:58.563 04:18:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:58.822 04:18:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:58.822 04:18:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:58.823 04:18:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:58.823 04:18:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:58.823 04:18:00 -- scripts/common.sh@335 -- # IFS=.-: 00:21:58.823 04:18:00 -- scripts/common.sh@335 -- # read -ra ver1 00:21:58.823 04:18:00 -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.823 04:18:00 -- scripts/common.sh@336 -- # read -ra ver2 00:21:58.823 04:18:00 -- scripts/common.sh@337 -- # local 'op=<' 00:21:58.823 04:18:00 -- scripts/common.sh@339 -- # ver1_l=2 00:21:58.823 04:18:00 -- scripts/common.sh@340 -- # ver2_l=1 00:21:58.823 04:18:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:58.823 04:18:00 -- scripts/common.sh@343 -- # case "$op" in 00:21:58.823 04:18:00 -- scripts/common.sh@344 -- # : 1 00:21:58.823 04:18:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:58.823 04:18:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:58.823 04:18:00 -- scripts/common.sh@364 -- # decimal 1 00:21:58.823 04:18:00 -- scripts/common.sh@352 -- # local d=1 00:21:58.823 04:18:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.823 04:18:00 -- scripts/common.sh@354 -- # echo 1 00:21:58.823 04:18:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:58.823 04:18:00 -- scripts/common.sh@365 -- # decimal 2 00:21:58.823 04:18:00 -- scripts/common.sh@352 -- # local d=2 00:21:58.823 04:18:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.823 04:18:00 -- scripts/common.sh@354 -- # echo 2 00:21:58.823 04:18:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:58.823 04:18:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:58.823 04:18:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:58.823 04:18:00 -- scripts/common.sh@367 -- # return 0 00:21:58.823 04:18:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.823 04:18:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:58.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.823 --rc genhtml_branch_coverage=1 00:21:58.823 --rc genhtml_function_coverage=1 00:21:58.823 --rc genhtml_legend=1 00:21:58.823 --rc geninfo_all_blocks=1 00:21:58.823 --rc geninfo_unexecuted_blocks=1 00:21:58.823 00:21:58.823 ' 00:21:58.823 04:18:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:58.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.823 --rc genhtml_branch_coverage=1 00:21:58.823 --rc genhtml_function_coverage=1 00:21:58.823 --rc genhtml_legend=1 00:21:58.823 --rc geninfo_all_blocks=1 00:21:58.823 --rc geninfo_unexecuted_blocks=1 00:21:58.823 00:21:58.823 ' 00:21:58.823 04:18:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:58.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.823 --rc genhtml_branch_coverage=1 00:21:58.823 --rc genhtml_function_coverage=1 00:21:58.823 --rc genhtml_legend=1 00:21:58.823 --rc geninfo_all_blocks=1 00:21:58.823 --rc geninfo_unexecuted_blocks=1 00:21:58.823 00:21:58.823 ' 00:21:58.823 
04:18:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:58.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.823 --rc genhtml_branch_coverage=1 00:21:58.823 --rc genhtml_function_coverage=1 00:21:58.823 --rc genhtml_legend=1 00:21:58.823 --rc geninfo_all_blocks=1 00:21:58.823 --rc geninfo_unexecuted_blocks=1 00:21:58.823 00:21:58.823 ' 00:21:58.823 04:18:00 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:58.823 04:18:00 -- nvmf/common.sh@7 -- # uname -s 00:21:58.823 04:18:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.823 04:18:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.823 04:18:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.823 04:18:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.823 04:18:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.823 04:18:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.823 04:18:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.823 04:18:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.823 04:18:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.823 04:18:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.823 04:18:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:21:58.823 04:18:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:21:58.823 04:18:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.823 04:18:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.823 04:18:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:58.823 04:18:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.823 04:18:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.823 04:18:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.823 04:18:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.823 04:18:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.823 04:18:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.823 04:18:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.823 04:18:00 -- paths/export.sh@5 -- # export PATH 00:21:58.823 04:18:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.823 04:18:00 -- nvmf/common.sh@46 -- # : 0 00:21:58.823 04:18:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:58.823 04:18:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:58.823 04:18:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:58.823 04:18:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.823 04:18:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.823 04:18:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:58.823 04:18:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:58.823 04:18:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:58.823 04:18:00 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:58.823 04:18:00 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:58.823 04:18:00 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:58.823 04:18:00 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:58.823 04:18:00 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:58.823 04:18:00 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:58.823 04:18:00 -- host/discovery.sh@25 -- # nvmftestinit 00:21:58.823 04:18:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:58.823 04:18:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.823 04:18:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:58.823 04:18:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:58.823 04:18:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:58.823 04:18:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.823 04:18:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.823 04:18:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.823 04:18:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:58.823 04:18:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:58.823 04:18:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:58.823 04:18:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:58.823 04:18:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:58.823 04:18:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:58.823 04:18:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.823 04:18:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.823 04:18:00 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:58.823 04:18:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:58.823 04:18:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:58.823 04:18:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:58.823 04:18:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:58.823 04:18:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.823 04:18:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:58.823 04:18:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:58.823 04:18:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:58.823 04:18:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:58.823 04:18:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:58.823 04:18:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:58.823 Cannot find device "nvmf_tgt_br" 00:21:58.823 04:18:00 -- nvmf/common.sh@154 -- # true 00:21:58.823 04:18:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.823 Cannot find device "nvmf_tgt_br2" 00:21:58.823 04:18:00 -- nvmf/common.sh@155 -- # true 00:21:58.823 04:18:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:58.823 04:18:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:58.823 Cannot find device "nvmf_tgt_br" 00:21:58.823 04:18:00 -- nvmf/common.sh@157 -- # true 00:21:58.823 04:18:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:58.823 Cannot find device "nvmf_tgt_br2" 00:21:58.823 04:18:00 -- nvmf/common.sh@158 -- # true 00:21:58.824 04:18:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:58.824 04:18:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:58.824 04:18:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:58.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.824 04:18:00 -- nvmf/common.sh@161 -- # true 00:21:58.824 04:18:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:58.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.824 04:18:00 -- nvmf/common.sh@162 -- # true 00:21:58.824 04:18:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:58.824 04:18:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:58.824 04:18:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:58.824 04:18:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:58.824 04:18:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:58.824 04:18:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:58.824 04:18:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:58.824 04:18:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:58.824 04:18:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:58.824 04:18:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:58.824 04:18:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:58.824 04:18:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:58.824 04:18:00 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:58.824 04:18:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:58.824 04:18:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.083 04:18:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.083 04:18:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.083 04:18:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.083 04:18:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.083 04:18:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.083 04:18:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.083 04:18:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.083 04:18:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.083 04:18:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:59.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:59.083 00:21:59.083 --- 10.0.0.2 ping statistics --- 00:21:59.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.083 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:59.083 04:18:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:59.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:59.083 00:21:59.083 --- 10.0.0.3 ping statistics --- 00:21:59.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.083 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:59.083 04:18:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:59.083 00:21:59.083 --- 10.0.0.1 ping statistics --- 00:21:59.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.083 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:59.083 04:18:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.083 04:18:00 -- nvmf/common.sh@421 -- # return 0 00:21:59.083 04:18:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.083 04:18:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.083 04:18:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.083 04:18:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.083 04:18:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.083 04:18:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.083 04:18:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.083 04:18:00 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:59.083 04:18:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.083 04:18:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.083 04:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.083 04:18:00 -- nvmf/common.sh@469 -- # nvmfpid=96361 00:21:59.083 04:18:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.083 04:18:00 -- nvmf/common.sh@470 -- # waitforlisten 96361 00:21:59.083 04:18:00 -- common/autotest_common.sh@829 -- # '[' -z 96361 ']' 00:21:59.083 04:18:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.083 04:18:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.083 04:18:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.083 04:18:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.083 04:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:59.083 [2024-11-26 04:18:00.747548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:59.083 [2024-11-26 04:18:00.747633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.342 [2024-11-26 04:18:00.886092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.342 [2024-11-26 04:18:00.943672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:59.342 [2024-11-26 04:18:00.943852] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.342 [2024-11-26 04:18:00.943865] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.342 [2024-11-26 04:18:00.943874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.342 [2024-11-26 04:18:00.943900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.277 04:18:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.277 04:18:01 -- common/autotest_common.sh@862 -- # return 0 00:22:00.277 04:18:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.277 04:18:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.277 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.277 04:18:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.277 04:18:01 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.277 04:18:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.277 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.277 [2024-11-26 04:18:01.782593] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.277 04:18:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.277 04:18:01 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:00.277 04:18:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.277 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.277 [2024-11-26 04:18:01.790797] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:00.277 04:18:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.277 04:18:01 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:00.277 04:18:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.277 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.277 null0 00:22:00.277 04:18:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.277 04:18:01 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:00.277 04:18:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.277 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.277 null1 00:22:00.277 04:18:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.278 04:18:01 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:00.278 04:18:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.278 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.278 04:18:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.278 04:18:01 -- host/discovery.sh@45 -- # hostpid=96411 00:22:00.278 04:18:01 -- host/discovery.sh@46 -- # waitforlisten 96411 /tmp/host.sock 00:22:00.278 04:18:01 -- common/autotest_common.sh@829 -- # '[' -z 96411 ']' 00:22:00.278 04:18:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:00.278 04:18:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.278 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:00.278 04:18:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:00.278 04:18:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.278 04:18:01 -- common/autotest_common.sh@10 -- # set +x 00:22:00.278 04:18:01 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:00.278 [2024-11-26 04:18:01.877602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:00.278 [2024-11-26 04:18:01.877693] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96411 ] 00:22:00.278 [2024-11-26 04:18:02.018885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.537 [2024-11-26 04:18:02.121522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.537 [2024-11-26 04:18:02.121669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.474 04:18:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.474 04:18:02 -- common/autotest_common.sh@862 -- # return 0 00:22:01.474 04:18:02 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.474 04:18:02 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:01.474 04:18:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.474 04:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 04:18:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.474 04:18:02 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:01.474 04:18:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.474 04:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 04:18:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.474 04:18:02 -- host/discovery.sh@72 -- # notify_id=0 00:22:01.474 04:18:02 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:01.474 04:18:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.474 04:18:02 -- host/discovery.sh@59 -- # sort 00:22:01.474 04:18:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.474 04:18:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.474 04:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 04:18:02 -- host/discovery.sh@59 -- # xargs 00:22:01.474 04:18:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.474 04:18:02 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:01.474 04:18:02 -- host/discovery.sh@79 -- # get_bdev_list 00:22:01.474 04:18:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.474 04:18:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.474 04:18:02 -- host/discovery.sh@55 -- # sort 00:22:01.474 04:18:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.474 04:18:02 -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 04:18:02 -- host/discovery.sh@55 -- # xargs 00:22:01.474 04:18:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.474 04:18:03 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:01.474 04:18:03 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:01.474 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.474 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.474 04:18:03 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.475 04:18:03 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.475 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # sort 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # xargs 00:22:01.475 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.475 04:18:03 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:01.475 04:18:03 -- host/discovery.sh@83 -- # get_bdev_list 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # sort 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # xargs 00:22:01.475 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.475 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.475 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.475 04:18:03 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:01.475 04:18:03 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:01.475 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.475 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.475 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.475 04:18:03 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # xargs 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.475 04:18:03 -- host/discovery.sh@59 -- # sort 00:22:01.475 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.475 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.475 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.475 04:18:03 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:01.475 04:18:03 -- host/discovery.sh@87 -- # get_bdev_list 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.475 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.475 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # sort 00:22:01.475 04:18:03 -- host/discovery.sh@55 -- # xargs 00:22:01.475 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:01.734 04:18:03 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.734 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.734 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.734 [2024-11-26 04:18:03.243046] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.734 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:01.734 04:18:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:01.734 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.734 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.734 04:18:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:01.734 04:18:03 -- host/discovery.sh@59 -- # sort 00:22:01.734 04:18:03 -- host/discovery.sh@59 -- # xargs 
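After the host issues bdev_nvme_start_discovery and confirms that no controllers or bdevs exist yet, the target builds up nqn.2016-06.io.spdk:cnode0 step by step: subsystem, namespace null0, then a data listener on port 4420. A rough equivalent of the two sides, sketched with the same rpc.py client and the values shown in the trace:

# Host (/tmp/host.sock): connect to the discovery service on 10.0.0.2:8009
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # still empty at this point
# Target: expose a subsystem the discovery service can hand out
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

In the trace that follows, nvme0 and the nvme0n1 bdev only appear on the host once nvmf_subsystem_add_host has allowed nqn.2021-12.io.spdk:test onto the subsystem.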
00:22:01.734 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:01.734 04:18:03 -- host/discovery.sh@93 -- # get_bdev_list 00:22:01.734 04:18:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.734 04:18:03 -- host/discovery.sh@55 -- # xargs 00:22:01.734 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.734 04:18:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.734 04:18:03 -- host/discovery.sh@55 -- # sort 00:22:01.734 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.734 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:01.734 04:18:03 -- host/discovery.sh@94 -- # get_notification_count 00:22:01.734 04:18:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:01.734 04:18:03 -- host/discovery.sh@74 -- # jq '. | length' 00:22:01.734 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.734 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.734 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@74 -- # notification_count=0 00:22:01.734 04:18:03 -- host/discovery.sh@75 -- # notify_id=0 00:22:01.734 04:18:03 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:01.734 04:18:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.734 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:22:01.734 04:18:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.734 04:18:03 -- host/discovery.sh@100 -- # sleep 1 00:22:02.302 [2024-11-26 04:18:03.895433] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:02.302 [2024-11-26 04:18:03.895475] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:02.302 [2024-11-26 04:18:03.895493] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:02.302 [2024-11-26 04:18:03.981527] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:02.302 [2024-11-26 04:18:04.037278] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:02.302 [2024-11-26 04:18:04.037305] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:02.870 04:18:04 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:02.870 04:18:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:02.870 04:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.870 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.870 04:18:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:02.870 04:18:04 -- host/discovery.sh@59 -- # sort 00:22:02.870 04:18:04 -- host/discovery.sh@59 -- # xargs 00:22:02.870 04:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@102 -- # get_bdev_list 00:22:02.870 04:18:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:02.870 04:18:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:02.870 04:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.870 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.870 04:18:04 -- host/discovery.sh@55 -- # sort 00:22:02.870 04:18:04 -- host/discovery.sh@55 -- # xargs 00:22:02.870 04:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:02.870 04:18:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:02.870 04:18:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:02.870 04:18:04 -- host/discovery.sh@63 -- # sort -n 00:22:02.870 04:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.870 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.870 04:18:04 -- host/discovery.sh@63 -- # xargs 00:22:02.870 04:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@104 -- # get_notification_count 00:22:02.870 04:18:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:02.870 04:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.870 04:18:04 -- host/discovery.sh@74 -- # jq '. | length' 00:22:02.870 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.870 04:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@74 -- # notification_count=1 00:22:02.870 04:18:04 -- host/discovery.sh@75 -- # notify_id=1 00:22:02.870 04:18:04 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:02.870 04:18:04 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:02.870 04:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.870 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:22:03.129 04:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.129 04:18:04 -- host/discovery.sh@109 -- # sleep 1 00:22:04.065 04:18:05 -- host/discovery.sh@110 -- # get_bdev_list 00:22:04.065 04:18:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.065 04:18:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.065 04:18:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.065 04:18:05 -- host/discovery.sh@55 -- # sort 00:22:04.065 04:18:05 -- host/discovery.sh@55 -- # xargs 00:22:04.065 04:18:05 -- common/autotest_common.sh@10 -- # set +x 00:22:04.065 04:18:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.065 04:18:05 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:04.065 04:18:05 -- host/discovery.sh@111 -- # get_notification_count 00:22:04.065 04:18:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:04.065 04:18:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.065 04:18:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.065 04:18:05 -- common/autotest_common.sh@10 -- # set +x 00:22:04.065 04:18:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.065 04:18:05 -- host/discovery.sh@74 -- # notification_count=1 00:22:04.065 04:18:05 -- host/discovery.sh@75 -- # notify_id=2 00:22:04.065 04:18:05 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:04.065 04:18:05 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:04.065 04:18:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.065 04:18:05 -- common/autotest_common.sh@10 -- # set +x 00:22:04.065 [2024-11-26 04:18:05.756352] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:04.065 [2024-11-26 04:18:05.757385] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:04.065 [2024-11-26 04:18:05.757413] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.065 04:18:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.065 04:18:05 -- host/discovery.sh@117 -- # sleep 1 00:22:04.324 [2024-11-26 04:18:05.843444] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:04.324 [2024-11-26 04:18:05.907622] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:04.324 [2024-11-26 04:18:05.907645] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:04.324 [2024-11-26 04:18:05.907651] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:05.260 04:18:06 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:05.260 04:18:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.260 04:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.260 04:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.260 04:18:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.260 04:18:06 -- host/discovery.sh@59 -- # sort 00:22:05.260 04:18:06 -- host/discovery.sh@59 -- # xargs 00:22:05.260 04:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@119 -- # get_bdev_list 00:22:05.260 04:18:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.260 04:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.260 04:18:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.260 04:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.260 04:18:06 -- host/discovery.sh@55 -- # sort 00:22:05.260 04:18:06 -- host/discovery.sh@55 -- # xargs 00:22:05.260 04:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:05.260 04:18:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.260 04:18:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.260 04:18:06 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.260 04:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.260 04:18:06 -- host/discovery.sh@63 -- # sort -n 00:22:05.260 04:18:06 -- host/discovery.sh@63 -- # xargs 00:22:05.260 04:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@121 -- # get_notification_count 00:22:05.260 04:18:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:05.260 04:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.260 04:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.260 04:18:06 -- host/discovery.sh@74 -- # jq '. | length' 00:22:05.260 04:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@74 -- # notification_count=0 00:22:05.260 04:18:06 -- host/discovery.sh@75 -- # notify_id=2 00:22:05.260 04:18:06 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:05.260 04:18:06 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:05.260 04:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.260 04:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.260 [2024-11-26 04:18:06.981322] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:05.260 [2024-11-26 04:18:06.981348] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.260 04:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.260 [2024-11-26 04:18:06.986024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.260 04:18:06 -- host/discovery.sh@127 -- # sleep 1 00:22:05.260 [2024-11-26 04:18:06.986068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.260 [2024-11-26 04:18:06.986080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.260 [2024-11-26 04:18:06.986089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.260 [2024-11-26 04:18:06.986098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.260 [2024-11-26 04:18:06.986106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.260 [2024-11-26 04:18:06.986115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.260 [2024-11-26 04:18:06.986123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.260 [2024-11-26 04:18:06.986132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.260 [2024-11-26 04:18:06.995947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.260 [2024-11-26 04:18:07.005962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.260 [2024-11-26 04:18:07.006055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.260 [2024-11-26 04:18:07.006096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.260 [2024-11-26 04:18:07.006112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.260 [2024-11-26 04:18:07.006121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.260 [2024-11-26 04:18:07.006135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.260 [2024-11-26 04:18:07.006148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.260 [2024-11-26 04:18:07.006157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.260 [2024-11-26 04:18:07.006166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.260 [2024-11-26 04:18:07.006179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.260 [2024-11-26 04:18:07.016014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.260 [2024-11-26 04:18:07.016086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.260 [2024-11-26 04:18:07.016124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.260 [2024-11-26 04:18:07.016138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.260 [2024-11-26 04:18:07.016147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.260 [2024-11-26 04:18:07.016160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.260 [2024-11-26 04:18:07.016173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.260 [2024-11-26 04:18:07.016181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.260 [2024-11-26 04:18:07.016189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.260 [2024-11-26 04:18:07.016201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
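The multipath step traced above adds a second listener on port 4421 to cnode0, lets the discovery AER report the new path (both 4420 and 4421 show up as trsvcids for nvme0), and then removes the original 4420 listener; the qpair disconnect and reconnect attempts logged here follow directly from that removal. The listener changes and the host-side path check, sketched with the same rpc.py client and values from the log:

# Target: add the second data listener, then retire the first
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host: list the trsvcids of the paths currently attached to controller nvme0
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs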
00:22:05.520 [2024-11-26 04:18:07.026077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.520 [2024-11-26 04:18:07.026161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.026204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.026221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.520 [2024-11-26 04:18:07.026231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.520 [2024-11-26 04:18:07.026247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.520 [2024-11-26 04:18:07.026261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.520 [2024-11-26 04:18:07.026270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.520 [2024-11-26 04:18:07.026279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.520 [2024-11-26 04:18:07.026308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.520 [2024-11-26 04:18:07.036124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.520 [2024-11-26 04:18:07.036192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.036230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.036244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.520 [2024-11-26 04:18:07.036254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.520 [2024-11-26 04:18:07.036267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.520 [2024-11-26 04:18:07.036280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.520 [2024-11-26 04:18:07.036290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.520 [2024-11-26 04:18:07.036297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.520 [2024-11-26 04:18:07.036310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:05.520 [2024-11-26 04:18:07.046167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.520 [2024-11-26 04:18:07.046246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.046284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.046299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.520 [2024-11-26 04:18:07.046308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.520 [2024-11-26 04:18:07.046322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.520 [2024-11-26 04:18:07.046349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.520 [2024-11-26 04:18:07.046357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.520 [2024-11-26 04:18:07.046364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.520 [2024-11-26 04:18:07.046376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.520 [2024-11-26 04:18:07.056220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.520 [2024-11-26 04:18:07.056283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.056319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.056332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.520 [2024-11-26 04:18:07.056342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.520 [2024-11-26 04:18:07.056356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.520 [2024-11-26 04:18:07.056368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.520 [2024-11-26 04:18:07.056376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.520 [2024-11-26 04:18:07.056384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.520 [2024-11-26 04:18:07.056396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
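The repeated "connect() failed, errno = 111" entries are ECONNREFUSED: bdev_nvme keeps trying to reset the path that still points at the removed 4420 listener. They stop once the next discovery log page reports that 10.0.0.2:4420 is gone and only 4421 remains (the "not found" / "found again" lines just below). A quick host-side check for the surviving path, using the same jq filter the test itself uses:

scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# expected once the stale path has been pruned: 4421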
00:22:05.520 [2024-11-26 04:18:07.066259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.520 [2024-11-26 04:18:07.066323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.066360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.520 [2024-11-26 04:18:07.066374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206b570 with addr=10.0.0.2, port=4420 00:22:05.520 [2024-11-26 04:18:07.066383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206b570 is same with the state(5) to be set 00:22:05.520 [2024-11-26 04:18:07.066397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206b570 (9): Bad file descriptor 00:22:05.520 [2024-11-26 04:18:07.066409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:05.520 [2024-11-26 04:18:07.066417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:05.520 [2024-11-26 04:18:07.066425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:05.520 [2024-11-26 04:18:07.066437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:05.520 [2024-11-26 04:18:07.067460] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:05.520 [2024-11-26 04:18:07.067485] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.456 04:18:07 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:06.456 04:18:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.456 04:18:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.456 04:18:07 -- common/autotest_common.sh@10 -- # set +x 00:22:06.456 04:18:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.456 04:18:07 -- host/discovery.sh@59 -- # sort 00:22:06.456 04:18:07 -- host/discovery.sh@59 -- # xargs 00:22:06.456 04:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@129 -- # get_bdev_list 00:22:06.456 04:18:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.456 04:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.456 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:22:06.456 04:18:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:06.456 04:18:08 -- host/discovery.sh@55 -- # sort 00:22:06.456 04:18:08 -- host/discovery.sh@55 -- # xargs 00:22:06.456 04:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:06.456 04:18:08 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.456 04:18:08 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.456 04:18:08 -- host/discovery.sh@63 -- # sort -n 00:22:06.456 04:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.456 04:18:08 -- 
host/discovery.sh@63 -- # xargs 00:22:06.456 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:22:06.456 04:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@131 -- # get_notification_count 00:22:06.456 04:18:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.456 04:18:08 -- host/discovery.sh@74 -- # jq '. | length' 00:22:06.456 04:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.456 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:22:06.456 04:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@74 -- # notification_count=0 00:22:06.456 04:18:08 -- host/discovery.sh@75 -- # notify_id=2 00:22:06.456 04:18:08 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:06.456 04:18:08 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:06.456 04:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.456 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:22:06.714 04:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.714 04:18:08 -- host/discovery.sh@135 -- # sleep 1 00:22:07.646 04:18:09 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:07.646 04:18:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:07.646 04:18:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:07.646 04:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.646 04:18:09 -- common/autotest_common.sh@10 -- # set +x 00:22:07.646 04:18:09 -- host/discovery.sh@59 -- # sort 00:22:07.646 04:18:09 -- host/discovery.sh@59 -- # xargs 00:22:07.646 04:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.646 04:18:09 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:07.646 04:18:09 -- host/discovery.sh@137 -- # get_bdev_list 00:22:07.646 04:18:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.646 04:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.646 04:18:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.646 04:18:09 -- common/autotest_common.sh@10 -- # set +x 00:22:07.646 04:18:09 -- host/discovery.sh@55 -- # sort 00:22:07.646 04:18:09 -- host/discovery.sh@55 -- # xargs 00:22:07.646 04:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.646 04:18:09 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:07.646 04:18:09 -- host/discovery.sh@138 -- # get_notification_count 00:22:07.646 04:18:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:07.646 04:18:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:07.646 04:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.646 04:18:09 -- common/autotest_common.sh@10 -- # set +x 00:22:07.646 04:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.646 04:18:09 -- host/discovery.sh@74 -- # notification_count=2 00:22:07.646 04:18:09 -- host/discovery.sh@75 -- # notify_id=4 00:22:07.646 04:18:09 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:07.646 04:18:09 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.646 04:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.646 04:18:09 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 [2024-11-26 04:18:10.406087] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:09.096 [2024-11-26 04:18:10.406113] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:09.096 [2024-11-26 04:18:10.406129] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.096 [2024-11-26 04:18:10.492173] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:09.096 [2024-11-26 04:18:10.551193] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:09.096 [2024-11-26 04:18:10.551228] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:09.096 04:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.096 04:18:10 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:09.096 04:18:10 -- common/autotest_common.sh@650 -- # local es=0 00:22:09.096 04:18:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:09.096 04:18:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:09.096 04:18:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.096 04:18:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:09.096 04:18:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.096 04:18:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:09.096 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.096 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 2024/11/26 04:18:10 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:09.096 request: 00:22:09.096 { 00:22:09.096 "method": "bdev_nvme_start_discovery", 00:22:09.096 "params": { 00:22:09.096 "name": "nvme", 00:22:09.096 "trtype": "tcp", 00:22:09.096 "traddr": "10.0.0.2", 00:22:09.096 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:09.096 "adrfam": "ipv4", 00:22:09.096 "trsvcid": "8009", 00:22:09.096 "wait_for_attach": true 00:22:09.096 } 
00:22:09.096 } 00:22:09.096 Got JSON-RPC error response 00:22:09.096 GoRPCClient: error on JSON-RPC call 00:22:09.096 04:18:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:09.096 04:18:10 -- common/autotest_common.sh@653 -- # es=1 00:22:09.096 04:18:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.096 04:18:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.096 04:18:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.097 04:18:10 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # xargs 00:22:09.097 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # sort 00:22:09.097 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 04:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.097 04:18:10 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:09.097 04:18:10 -- host/discovery.sh@147 -- # get_bdev_list 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # sort 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # xargs 00:22:09.097 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.097 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 04:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.097 04:18:10 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:09.097 04:18:10 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:09.097 04:18:10 -- common/autotest_common.sh@650 -- # local es=0 00:22:09.097 04:18:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:09.097 04:18:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:09.097 04:18:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.097 04:18:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:09.097 04:18:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.097 04:18:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:09.097 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.097 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 2024/11/26 04:18:10 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:09.097 request: 00:22:09.097 { 00:22:09.097 "method": "bdev_nvme_start_discovery", 00:22:09.097 "params": { 00:22:09.097 "name": "nvme_second", 00:22:09.097 "trtype": "tcp", 00:22:09.097 "traddr": "10.0.0.2", 00:22:09.097 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:09.097 "adrfam": "ipv4", 00:22:09.097 
"trsvcid": "8009", 00:22:09.097 "wait_for_attach": true 00:22:09.097 } 00:22:09.097 } 00:22:09.097 Got JSON-RPC error response 00:22:09.097 GoRPCClient: error on JSON-RPC call 00:22:09.097 04:18:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:09.097 04:18:10 -- common/autotest_common.sh@653 -- # es=1 00:22:09.097 04:18:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.097 04:18:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.097 04:18:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.097 04:18:10 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:09.097 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.097 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # sort 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # xargs 00:22:09.097 04:18:10 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:09.097 04:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.097 04:18:10 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:09.097 04:18:10 -- host/discovery.sh@153 -- # get_bdev_list 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # sort 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.097 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.097 04:18:10 -- host/discovery.sh@55 -- # xargs 00:22:09.097 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 04:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.097 04:18:10 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:09.097 04:18:10 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:09.097 04:18:10 -- common/autotest_common.sh@650 -- # local es=0 00:22:09.097 04:18:10 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:09.097 04:18:10 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:09.097 04:18:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.097 04:18:10 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:09.097 04:18:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.097 04:18:10 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:09.097 04:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.097 04:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:10.077 [2024-11-26 04:18:11.813385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.077 [2024-11-26 04:18:11.813451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.077 [2024-11-26 04:18:11.813468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2106f80 with addr=10.0.0.2, port=8010 00:22:10.077 [2024-11-26 04:18:11.813483] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:10.077 [2024-11-26 04:18:11.813492] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:10.077 [2024-11-26 04:18:11.813500] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:11.455 [2024-11-26 04:18:12.813362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.455 [2024-11-26 04:18:12.813421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.455 [2024-11-26 04:18:12.813437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dfca0 with addr=10.0.0.2, port=8010 00:22:11.455 [2024-11-26 04:18:12.813448] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:11.455 [2024-11-26 04:18:12.813456] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:11.455 [2024-11-26 04:18:12.813464] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:12.392 [2024-11-26 04:18:13.813298] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:12.392 2024/11/26 04:18:13 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:12.392 request: 00:22:12.392 { 00:22:12.392 "method": "bdev_nvme_start_discovery", 00:22:12.392 "params": { 00:22:12.392 "name": "nvme_second", 00:22:12.392 "trtype": "tcp", 00:22:12.392 "traddr": "10.0.0.2", 00:22:12.392 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:12.392 "adrfam": "ipv4", 00:22:12.392 "trsvcid": "8010", 00:22:12.392 "attach_timeout_ms": 3000 00:22:12.392 } 00:22:12.392 } 00:22:12.392 Got JSON-RPC error response 00:22:12.392 GoRPCClient: error on JSON-RPC call 00:22:12.392 04:18:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:12.392 04:18:13 -- common/autotest_common.sh@653 -- # es=1 00:22:12.392 04:18:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:12.392 04:18:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:12.392 04:18:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:12.392 04:18:13 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:12.392 04:18:13 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:12.392 04:18:13 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:12.392 04:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.392 04:18:13 -- common/autotest_common.sh@10 -- # set +x 00:22:12.392 04:18:13 -- host/discovery.sh@67 -- # sort 00:22:12.392 04:18:13 -- host/discovery.sh@67 -- # xargs 00:22:12.392 04:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.392 04:18:13 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:12.392 04:18:13 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:12.392 04:18:13 -- host/discovery.sh@162 -- # kill 96411 00:22:12.392 04:18:13 -- host/discovery.sh@163 -- # nvmftestfini 00:22:12.392 04:18:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:12.392 04:18:13 -- nvmf/common.sh@116 -- # sync 00:22:12.392 04:18:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:12.392 04:18:13 -- nvmf/common.sh@119 -- # set +e 00:22:12.392 04:18:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:12.392 04:18:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:22:12.392 rmmod nvme_tcp 00:22:12.392 rmmod nvme_fabrics 00:22:12.392 rmmod nvme_keyring 00:22:12.392 04:18:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:12.392 04:18:13 -- nvmf/common.sh@123 -- # set -e 00:22:12.392 04:18:13 -- nvmf/common.sh@124 -- # return 0 00:22:12.392 04:18:13 -- nvmf/common.sh@477 -- # '[' -n 96361 ']' 00:22:12.392 04:18:13 -- nvmf/common.sh@478 -- # killprocess 96361 00:22:12.392 04:18:13 -- common/autotest_common.sh@936 -- # '[' -z 96361 ']' 00:22:12.392 04:18:13 -- common/autotest_common.sh@940 -- # kill -0 96361 00:22:12.392 04:18:13 -- common/autotest_common.sh@941 -- # uname 00:22:12.392 04:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:12.392 04:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96361 00:22:12.392 04:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:12.392 killing process with pid 96361 00:22:12.392 04:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:12.392 04:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96361' 00:22:12.392 04:18:13 -- common/autotest_common.sh@955 -- # kill 96361 00:22:12.392 04:18:13 -- common/autotest_common.sh@960 -- # wait 96361 00:22:12.653 04:18:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:12.653 04:18:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:12.653 04:18:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:12.653 04:18:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.653 04:18:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:12.653 04:18:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.653 04:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.653 04:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.653 04:18:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:12.653 00:22:12.653 real 0m14.053s 00:22:12.653 user 0m27.467s 00:22:12.653 sys 0m1.768s 00:22:12.653 04:18:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:12.653 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:12.653 ************************************ 00:22:12.653 END TEST nvmf_discovery 00:22:12.653 ************************************ 00:22:12.653 04:18:14 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:12.653 04:18:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:12.653 04:18:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:12.653 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:12.654 ************************************ 00:22:12.654 START TEST nvmf_discovery_remove_ifc 00:22:12.654 ************************************ 00:22:12.654 04:18:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:12.654 * Looking for test storage... 
00:22:12.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:12.654 04:18:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:12.654 04:18:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:12.654 04:18:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:12.913 04:18:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:12.913 04:18:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:12.913 04:18:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:12.913 04:18:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:12.913 04:18:14 -- scripts/common.sh@335 -- # IFS=.-: 00:22:12.913 04:18:14 -- scripts/common.sh@335 -- # read -ra ver1 00:22:12.913 04:18:14 -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.913 04:18:14 -- scripts/common.sh@336 -- # read -ra ver2 00:22:12.913 04:18:14 -- scripts/common.sh@337 -- # local 'op=<' 00:22:12.913 04:18:14 -- scripts/common.sh@339 -- # ver1_l=2 00:22:12.913 04:18:14 -- scripts/common.sh@340 -- # ver2_l=1 00:22:12.913 04:18:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:12.913 04:18:14 -- scripts/common.sh@343 -- # case "$op" in 00:22:12.913 04:18:14 -- scripts/common.sh@344 -- # : 1 00:22:12.913 04:18:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:12.913 04:18:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.913 04:18:14 -- scripts/common.sh@364 -- # decimal 1 00:22:12.913 04:18:14 -- scripts/common.sh@352 -- # local d=1 00:22:12.913 04:18:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.913 04:18:14 -- scripts/common.sh@354 -- # echo 1 00:22:12.913 04:18:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:12.913 04:18:14 -- scripts/common.sh@365 -- # decimal 2 00:22:12.913 04:18:14 -- scripts/common.sh@352 -- # local d=2 00:22:12.913 04:18:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.913 04:18:14 -- scripts/common.sh@354 -- # echo 2 00:22:12.913 04:18:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:12.913 04:18:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:12.913 04:18:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:12.913 04:18:14 -- scripts/common.sh@367 -- # return 0 00:22:12.913 04:18:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.913 04:18:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:12.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.913 --rc genhtml_branch_coverage=1 00:22:12.913 --rc genhtml_function_coverage=1 00:22:12.913 --rc genhtml_legend=1 00:22:12.913 --rc geninfo_all_blocks=1 00:22:12.913 --rc geninfo_unexecuted_blocks=1 00:22:12.913 00:22:12.913 ' 00:22:12.913 04:18:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:12.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.913 --rc genhtml_branch_coverage=1 00:22:12.913 --rc genhtml_function_coverage=1 00:22:12.913 --rc genhtml_legend=1 00:22:12.913 --rc geninfo_all_blocks=1 00:22:12.913 --rc geninfo_unexecuted_blocks=1 00:22:12.913 00:22:12.913 ' 00:22:12.913 04:18:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:12.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.913 --rc genhtml_branch_coverage=1 00:22:12.913 --rc genhtml_function_coverage=1 00:22:12.913 --rc genhtml_legend=1 00:22:12.913 --rc geninfo_all_blocks=1 00:22:12.913 --rc geninfo_unexecuted_blocks=1 00:22:12.913 00:22:12.913 ' 00:22:12.913 
04:18:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:12.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.913 --rc genhtml_branch_coverage=1 00:22:12.913 --rc genhtml_function_coverage=1 00:22:12.913 --rc genhtml_legend=1 00:22:12.913 --rc geninfo_all_blocks=1 00:22:12.913 --rc geninfo_unexecuted_blocks=1 00:22:12.913 00:22:12.913 ' 00:22:12.913 04:18:14 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.913 04:18:14 -- nvmf/common.sh@7 -- # uname -s 00:22:12.913 04:18:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.913 04:18:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.913 04:18:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.913 04:18:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.913 04:18:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.913 04:18:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.913 04:18:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.913 04:18:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.913 04:18:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.913 04:18:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.913 04:18:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:22:12.913 04:18:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:22:12.913 04:18:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.913 04:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.913 04:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:12.913 04:18:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.913 04:18:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.913 04:18:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.913 04:18:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.913 04:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.913 04:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.914 04:18:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.914 04:18:14 -- paths/export.sh@5 -- # export PATH 00:22:12.914 04:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.914 04:18:14 -- nvmf/common.sh@46 -- # : 0 00:22:12.914 04:18:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:12.914 04:18:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:12.914 04:18:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:12.914 04:18:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.914 04:18:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.914 04:18:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:12.914 04:18:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:12.914 04:18:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:12.914 04:18:14 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:12.914 04:18:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:12.914 04:18:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.914 04:18:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:12.914 04:18:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:12.914 04:18:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:12.914 04:18:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.914 04:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.914 04:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.914 04:18:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:12.914 04:18:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:12.914 04:18:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:12.914 04:18:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:12.914 04:18:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:12.914 04:18:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:12.914 04:18:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.914 04:18:14 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.914 04:18:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:12.914 04:18:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:12.914 04:18:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:12.914 04:18:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:12.914 04:18:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:12.914 04:18:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.914 04:18:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:12.914 04:18:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:12.914 04:18:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:12.914 04:18:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:12.914 04:18:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:12.914 04:18:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:12.914 Cannot find device "nvmf_tgt_br" 00:22:12.914 04:18:14 -- nvmf/common.sh@154 -- # true 00:22:12.914 04:18:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.914 Cannot find device "nvmf_tgt_br2" 00:22:12.914 04:18:14 -- nvmf/common.sh@155 -- # true 00:22:12.914 04:18:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:12.914 04:18:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:12.914 Cannot find device "nvmf_tgt_br" 00:22:12.914 04:18:14 -- nvmf/common.sh@157 -- # true 00:22:12.914 04:18:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:12.914 Cannot find device "nvmf_tgt_br2" 00:22:12.914 04:18:14 -- nvmf/common.sh@158 -- # true 00:22:12.914 04:18:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:12.914 04:18:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:12.914 04:18:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.914 04:18:14 -- nvmf/common.sh@161 -- # true 00:22:12.914 04:18:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.914 04:18:14 -- nvmf/common.sh@162 -- # true 00:22:12.914 04:18:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:12.914 04:18:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:12.914 04:18:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.173 04:18:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.173 04:18:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.173 04:18:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.173 04:18:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:13.173 04:18:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:13.173 04:18:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:13.173 04:18:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:13.173 04:18:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:13.173 04:18:14 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:13.173 04:18:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:13.173 04:18:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.173 04:18:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.173 04:18:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.173 04:18:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:13.173 04:18:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:13.173 04:18:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:13.174 04:18:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.174 04:18:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.174 04:18:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.174 04:18:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.174 04:18:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:13.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:22:13.174 00:22:13.174 --- 10.0.0.2 ping statistics --- 00:22:13.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.174 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:13.174 04:18:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:13.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:13.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:13.174 00:22:13.174 --- 10.0.0.3 ping statistics --- 00:22:13.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.174 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:13.174 04:18:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:13.174 00:22:13.174 --- 10.0.0.1 ping statistics --- 00:22:13.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.174 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:13.174 04:18:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.174 04:18:14 -- nvmf/common.sh@421 -- # return 0 00:22:13.174 04:18:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:13.174 04:18:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.174 04:18:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:13.174 04:18:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:13.174 04:18:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.174 04:18:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:13.174 04:18:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:13.174 04:18:14 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:13.174 04:18:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.174 04:18:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.174 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.174 04:18:14 -- nvmf/common.sh@469 -- # nvmfpid=96925 00:22:13.174 04:18:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:13.174 04:18:14 -- nvmf/common.sh@470 -- # waitforlisten 96925 00:22:13.174 04:18:14 -- common/autotest_common.sh@829 -- # '[' -z 96925 ']' 00:22:13.174 04:18:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.174 04:18:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.174 04:18:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.174 04:18:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.174 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.174 [2024-11-26 04:18:14.916364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:13.174 [2024-11-26 04:18:14.916448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.432 [2024-11-26 04:18:15.055459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.432 [2024-11-26 04:18:15.109742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:13.432 [2024-11-26 04:18:15.109892] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.432 [2024-11-26 04:18:15.109904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.432 [2024-11-26 04:18:15.109913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
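Note: the nvmf_veth_init trace above builds the disposable network topology every nvmf TCP test in this log relies on: a namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining the host-side ends, and ping checks of 10.0.0.1/.2/.3. A minimal standalone sketch of the same setup, reconstructed from the commands visible in the trace (interface names and addresses are taken from the log; error handling and the pre-cleanup steps are omitted):

  # namespace plus three veth pairs (interface/peer naming as in the trace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side interfaces move into the namespace and get 10.0.0.2/.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the peer ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic on 4420 and forwarding inside the bridge, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1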
00:22:13.432 [2024-11-26 04:18:15.109937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.366 04:18:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.366 04:18:15 -- common/autotest_common.sh@862 -- # return 0 00:22:14.366 04:18:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:14.366 04:18:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:14.366 04:18:15 -- common/autotest_common.sh@10 -- # set +x 00:22:14.366 04:18:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.366 04:18:15 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:14.366 04:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.366 04:18:15 -- common/autotest_common.sh@10 -- # set +x 00:22:14.366 [2024-11-26 04:18:15.903794] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.366 [2024-11-26 04:18:15.911949] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:14.366 null0 00:22:14.367 [2024-11-26 04:18:15.943872] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.367 04:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.367 04:18:15 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96975 00:22:14.367 04:18:15 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:14.367 04:18:15 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96975 /tmp/host.sock 00:22:14.367 04:18:15 -- common/autotest_common.sh@829 -- # '[' -z 96975 ']' 00:22:14.367 04:18:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:14.367 04:18:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.367 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:14.367 04:18:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:14.367 04:18:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.367 04:18:15 -- common/autotest_common.sh@10 -- # set +x 00:22:14.367 [2024-11-26 04:18:16.019099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
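Note: the rpc_cmd call traced above (host/discovery_remove_ifc.sh@43) configures the target that was just launched inside the namespace, but the batched RPCs themselves are not echoed by xtrace; only their effects appear (the null0 bdev and the TCP listeners on 10.0.0.2:8009 and 10.0.0.2:4420). The sketch below is a representative equivalent, assuming standard SPDK rpc.py method names; the null bdev geometry is illustrative, while the NQNs, serial and addresses are the values set earlier in this log:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp
  $RPC bdev_null_create null0 1000 512      # 1000 MiB / 512 B blocks, illustrative values
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009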
00:22:14.367 [2024-11-26 04:18:16.019193] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96975 ] 00:22:14.626 [2024-11-26 04:18:16.162379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.626 [2024-11-26 04:18:16.237570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:14.626 [2024-11-26 04:18:16.237809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.626 04:18:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.626 04:18:16 -- common/autotest_common.sh@862 -- # return 0 00:22:14.626 04:18:16 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.626 04:18:16 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:14.626 04:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.626 04:18:16 -- common/autotest_common.sh@10 -- # set +x 00:22:14.626 04:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.626 04:18:16 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:14.626 04:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.626 04:18:16 -- common/autotest_common.sh@10 -- # set +x 00:22:14.884 04:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.884 04:18:16 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:14.884 04:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.884 04:18:16 -- common/autotest_common.sh@10 -- # set +x 00:22:15.821 [2024-11-26 04:18:17.416678] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:15.821 [2024-11-26 04:18:17.416708] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:15.821 [2024-11-26 04:18:17.416731] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.821 [2024-11-26 04:18:17.503775] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:15.821 [2024-11-26 04:18:17.559458] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:15.821 [2024-11-26 04:18:17.559505] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:15.821 [2024-11-26 04:18:17.559532] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:15.821 [2024-11-26 04:18:17.559546] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:15.821 [2024-11-26 04:18:17.559564] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:15.822 04:18:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.822 04:18:17 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:15.822 04:18:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:15.822 04:18:17 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.822 [2024-11-26 04:18:17.565236] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x87eda0 was disconnected and freed. delete nvme_qpair. 00:22:15.822 04:18:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.822 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:15.822 04:18:17 -- common/autotest_common.sh@10 -- # set +x 00:22:15.822 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:15.822 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:15.822 04:18:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.081 04:18:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:16.081 04:18:17 -- common/autotest_common.sh@10 -- # set +x 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:16.081 04:18:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:16.081 04:18:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.017 04:18:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:17.017 04:18:18 -- common/autotest_common.sh@10 -- # set +x 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:17.017 04:18:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:17.017 04:18:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.393 04:18:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.393 04:18:19 -- common/autotest_common.sh@10 -- # set +x 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:18.393 04:18:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:18.393 04:18:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
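Note: on the host side the test drives a second SPDK application over /tmp/host.sock, enables the NVMe bdev module, and lets the discovery service at 10.0.0.2:8009 create the controller; it then deletes the target-side address to force a disconnect and polls the bdev list. The launch, option and discovery commands below are copied from the trace; the get_bdev_list/wait_for_bdev helpers are reconstructions inferred from the traced pipeline and may differ in detail from the real discovery_remove_ifc.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  HOST_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock'
  $HOST_RPC bdev_nvme_set_options -e 1      # option flag copied verbatim from the trace
  $HOST_RPC framework_start_init
  $HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

  get_bdev_list() {    # reconstructed helper: flat, sorted list of bdev names
      $HOST_RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {    # reconstructed helper: poll once per second until the list matches $1 ('' = empty)
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }

  wait_for_bdev nvme0n1                                          # namespace shows up as nvme0n1 after attach
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''                                               # bdev must vanish once the controller is lost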
00:22:19.328 04:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.328 04:18:20 -- common/autotest_common.sh@10 -- # set +x 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.328 04:18:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:19.328 04:18:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.264 04:18:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.264 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.264 04:18:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:20.264 04:18:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:21.200 04:18:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:21.200 04:18:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.200 04:18:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.200 04:18:22 -- common/autotest_common.sh@10 -- # set +x 00:22:21.200 04:18:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:21.200 04:18:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:21.200 04:18:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:21.459 04:18:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.459 [2024-11-26 04:18:22.987392] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:21.459 [2024-11-26 04:18:22.987622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.459 [2024-11-26 04:18:22.987761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.459 [2024-11-26 04:18:22.987995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.459 [2024-11-26 04:18:22.988060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.459 [2024-11-26 04:18:22.988199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.460 [2024-11-26 04:18:22.988249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.460 [2024-11-26 04:18:22.988352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.460 [2024-11-26 04:18:22.988468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.460 [2024-11-26 
04:18:22.988625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.460 [2024-11-26 04:18:22.988643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.460 [2024-11-26 04:18:22.988652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e8690 is same with the state(5) to be set 00:22:21.460 [2024-11-26 04:18:22.997388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e8690 (9): Bad file descriptor 00:22:21.460 04:18:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:21.460 04:18:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:21.460 [2024-11-26 04:18:23.007408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.395 04:18:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.395 04:18:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.395 04:18:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.395 04:18:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.395 04:18:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.395 04:18:24 -- common/autotest_common.sh@10 -- # set +x 00:22:22.395 04:18:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.395 [2024-11-26 04:18:24.019830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:23.330 [2024-11-26 04:18:25.043846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:23.330 [2024-11-26 04:18:25.044192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e8690 with addr=10.0.0.2, port=4420 00:22:23.330 [2024-11-26 04:18:25.044444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e8690 is same with the state(5) to be set 00:22:23.330 [2024-11-26 04:18:25.044500] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.330 [2024-11-26 04:18:25.044523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.330 [2024-11-26 04:18:25.044543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.330 [2024-11-26 04:18:25.044563] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:23.330 [2024-11-26 04:18:25.045323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e8690 (9): Bad file descriptor 00:22:23.330 [2024-11-26 04:18:25.045401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:23.330 [2024-11-26 04:18:25.045457] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:23.330 [2024-11-26 04:18:25.045522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.330 [2024-11-26 04:18:25.045554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.330 [2024-11-26 04:18:25.045579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.330 [2024-11-26 04:18:25.045600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.330 [2024-11-26 04:18:25.045622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.330 [2024-11-26 04:18:25.045643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.330 [2024-11-26 04:18:25.045666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.330 [2024-11-26 04:18:25.045686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.330 [2024-11-26 04:18:25.045708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.330 [2024-11-26 04:18:25.045758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.330 [2024-11-26 04:18:25.045780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
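Note: the errno 110 (connection timed out) and 'Bad file descriptor' errors above are the expected consequence of deleting 10.0.0.2 from nvmf_tgt_if while keep-alives and reads are outstanding. With the options passed to bdev_nvme_start_discovery earlier in this log (--reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1, --ctrlr-loss-timeout-sec 2), the host retries the lost controller roughly once per second for about two seconds, then gives up, deletes nvme0n1 and drops the discovery entry, so the polled bdev list drains to empty. Using the helpers sketched above and the commands that follow in the trace, the pass condition for the remainder of the test amounts to:

  wait_for_bdev ''                                               # controller loss removes nvme0n1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1                                          # re-discovery attaches a fresh controller as nvme1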
00:22:23.330 [2024-11-26 04:18:25.045844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846410 (9): Bad file descriptor 00:22:23.330 [2024-11-26 04:18:25.046845] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:23.330 [2024-11-26 04:18:25.046889] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:23.330 04:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.330 04:18:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:23.330 04:18:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.706 04:18:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.706 04:18:26 -- common/autotest_common.sh@10 -- # set +x 00:22:24.706 04:18:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.706 04:18:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.706 04:18:26 -- common/autotest_common.sh@10 -- # set +x 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.706 04:18:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.706 04:18:26 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:24.707 04:18:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.644 [2024-11-26 04:18:27.057925] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:25.644 [2024-11-26 04:18:27.057946] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:25.644 [2024-11-26 04:18:27.057962] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:25.644 [2024-11-26 04:18:27.144013] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.644 [2024-11-26 04:18:27.199071] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:25.644 [2024-11-26 04:18:27.199112] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:25.644 [2024-11-26 04:18:27.199132] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:25.644 [2024-11-26 04:18:27.199146] 
bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:25.644 [2024-11-26 04:18:27.199153] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.644 04:18:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.644 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.644 [2024-11-26 04:18:27.206579] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x84c0c0 was disconnected and freed. delete nvme_qpair. 00:22:25.644 04:18:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:25.644 04:18:27 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96975 00:22:25.645 04:18:27 -- common/autotest_common.sh@936 -- # '[' -z 96975 ']' 00:22:25.645 04:18:27 -- common/autotest_common.sh@940 -- # kill -0 96975 00:22:25.645 04:18:27 -- common/autotest_common.sh@941 -- # uname 00:22:25.645 04:18:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.645 04:18:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96975 00:22:25.645 killing process with pid 96975 00:22:25.645 04:18:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:25.645 04:18:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:25.645 04:18:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96975' 00:22:25.645 04:18:27 -- common/autotest_common.sh@955 -- # kill 96975 00:22:25.645 04:18:27 -- common/autotest_common.sh@960 -- # wait 96975 00:22:25.904 04:18:27 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:25.904 04:18:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:25.904 04:18:27 -- nvmf/common.sh@116 -- # sync 00:22:25.904 04:18:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:25.904 04:18:27 -- nvmf/common.sh@119 -- # set +e 00:22:25.904 04:18:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:25.904 04:18:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:25.904 rmmod nvme_tcp 00:22:25.904 rmmod nvme_fabrics 00:22:25.904 rmmod nvme_keyring 00:22:25.904 04:18:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:25.904 04:18:27 -- nvmf/common.sh@123 -- # set -e 00:22:25.904 04:18:27 -- nvmf/common.sh@124 -- # return 0 00:22:25.904 04:18:27 -- nvmf/common.sh@477 -- # '[' -n 96925 ']' 00:22:25.904 04:18:27 -- nvmf/common.sh@478 -- # killprocess 96925 00:22:25.904 04:18:27 -- common/autotest_common.sh@936 -- # '[' -z 96925 ']' 00:22:25.904 04:18:27 -- common/autotest_common.sh@940 -- # kill -0 96925 00:22:25.904 04:18:27 -- common/autotest_common.sh@941 -- # uname 00:22:25.904 04:18:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.904 04:18:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96925 00:22:26.162 killing process with pid 96925 00:22:26.162 04:18:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:26.162 04:18:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:22:26.162 04:18:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96925' 00:22:26.162 04:18:27 -- common/autotest_common.sh@955 -- # kill 96925 00:22:26.162 04:18:27 -- common/autotest_common.sh@960 -- # wait 96925 00:22:26.162 04:18:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:26.162 04:18:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:26.162 04:18:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:26.162 04:18:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.162 04:18:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:26.162 04:18:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.162 04:18:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.162 04:18:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.162 04:18:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:26.162 ************************************ 00:22:26.162 END TEST nvmf_discovery_remove_ifc 00:22:26.162 ************************************ 00:22:26.162 00:22:26.162 real 0m13.599s 00:22:26.162 user 0m22.990s 00:22:26.162 sys 0m1.503s 00:22:26.162 04:18:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:26.162 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:22:26.422 04:18:27 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:26.422 04:18:27 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:26.422 04:18:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:26.422 04:18:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:26.422 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:22:26.422 ************************************ 00:22:26.422 START TEST nvmf_digest 00:22:26.422 ************************************ 00:22:26.422 04:18:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:26.422 * Looking for test storage... 00:22:26.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:26.422 04:18:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:26.422 04:18:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:26.422 04:18:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:26.422 04:18:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:26.422 04:18:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:26.422 04:18:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:26.422 04:18:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:26.422 04:18:28 -- scripts/common.sh@335 -- # IFS=.-: 00:22:26.422 04:18:28 -- scripts/common.sh@335 -- # read -ra ver1 00:22:26.422 04:18:28 -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.422 04:18:28 -- scripts/common.sh@336 -- # read -ra ver2 00:22:26.422 04:18:28 -- scripts/common.sh@337 -- # local 'op=<' 00:22:26.422 04:18:28 -- scripts/common.sh@339 -- # ver1_l=2 00:22:26.422 04:18:28 -- scripts/common.sh@340 -- # ver2_l=1 00:22:26.422 04:18:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:26.422 04:18:28 -- scripts/common.sh@343 -- # case "$op" in 00:22:26.422 04:18:28 -- scripts/common.sh@344 -- # : 1 00:22:26.422 04:18:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:26.422 04:18:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.422 04:18:28 -- scripts/common.sh@364 -- # decimal 1 00:22:26.422 04:18:28 -- scripts/common.sh@352 -- # local d=1 00:22:26.422 04:18:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.422 04:18:28 -- scripts/common.sh@354 -- # echo 1 00:22:26.422 04:18:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:26.422 04:18:28 -- scripts/common.sh@365 -- # decimal 2 00:22:26.422 04:18:28 -- scripts/common.sh@352 -- # local d=2 00:22:26.422 04:18:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.422 04:18:28 -- scripts/common.sh@354 -- # echo 2 00:22:26.422 04:18:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:26.422 04:18:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:26.422 04:18:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:26.422 04:18:28 -- scripts/common.sh@367 -- # return 0 00:22:26.422 04:18:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.422 04:18:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:26.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.422 --rc genhtml_branch_coverage=1 00:22:26.422 --rc genhtml_function_coverage=1 00:22:26.422 --rc genhtml_legend=1 00:22:26.422 --rc geninfo_all_blocks=1 00:22:26.422 --rc geninfo_unexecuted_blocks=1 00:22:26.422 00:22:26.422 ' 00:22:26.422 04:18:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:26.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.422 --rc genhtml_branch_coverage=1 00:22:26.422 --rc genhtml_function_coverage=1 00:22:26.422 --rc genhtml_legend=1 00:22:26.422 --rc geninfo_all_blocks=1 00:22:26.422 --rc geninfo_unexecuted_blocks=1 00:22:26.422 00:22:26.422 ' 00:22:26.422 04:18:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:26.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.422 --rc genhtml_branch_coverage=1 00:22:26.422 --rc genhtml_function_coverage=1 00:22:26.422 --rc genhtml_legend=1 00:22:26.422 --rc geninfo_all_blocks=1 00:22:26.422 --rc geninfo_unexecuted_blocks=1 00:22:26.422 00:22:26.422 ' 00:22:26.422 04:18:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:26.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.422 --rc genhtml_branch_coverage=1 00:22:26.422 --rc genhtml_function_coverage=1 00:22:26.422 --rc genhtml_legend=1 00:22:26.422 --rc geninfo_all_blocks=1 00:22:26.422 --rc geninfo_unexecuted_blocks=1 00:22:26.422 00:22:26.422 ' 00:22:26.422 04:18:28 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.422 04:18:28 -- nvmf/common.sh@7 -- # uname -s 00:22:26.422 04:18:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.422 04:18:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.422 04:18:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.422 04:18:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.422 04:18:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.422 04:18:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.422 04:18:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.422 04:18:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.422 04:18:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.422 04:18:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.422 04:18:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:22:26.422 
04:18:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:22:26.422 04:18:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.422 04:18:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.422 04:18:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.422 04:18:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.422 04:18:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.422 04:18:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.422 04:18:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.422 04:18:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.422 04:18:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.422 04:18:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.422 04:18:28 -- paths/export.sh@5 -- # export PATH 00:22:26.422 04:18:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.422 04:18:28 -- nvmf/common.sh@46 -- # : 0 00:22:26.422 04:18:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.422 04:18:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.422 04:18:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.422 04:18:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.422 04:18:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.422 04:18:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:26.422 04:18:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.422 04:18:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.422 04:18:28 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:26.422 04:18:28 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:26.422 04:18:28 -- host/digest.sh@16 -- # runtime=2 00:22:26.422 04:18:28 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:26.422 04:18:28 -- host/digest.sh@132 -- # nvmftestinit 00:22:26.422 04:18:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:26.422 04:18:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.422 04:18:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:26.422 04:18:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:26.422 04:18:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:26.422 04:18:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.422 04:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.422 04:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.422 04:18:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:26.422 04:18:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:26.422 04:18:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:26.422 04:18:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:26.422 04:18:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:26.422 04:18:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:26.422 04:18:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.422 04:18:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.422 04:18:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:26.422 04:18:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:26.422 04:18:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.423 04:18:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.423 04:18:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.423 04:18:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.423 04:18:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.423 04:18:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.423 04:18:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.423 04:18:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.423 04:18:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:26.423 04:18:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:26.682 Cannot find device "nvmf_tgt_br" 00:22:26.682 04:18:28 -- nvmf/common.sh@154 -- # true 00:22:26.682 04:18:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.682 Cannot find device "nvmf_tgt_br2" 00:22:26.682 04:18:28 -- nvmf/common.sh@155 -- # true 00:22:26.682 04:18:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:26.682 04:18:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:26.682 Cannot find device "nvmf_tgt_br" 00:22:26.682 04:18:28 -- nvmf/common.sh@157 -- # true 00:22:26.682 04:18:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:26.682 Cannot find device "nvmf_tgt_br2" 00:22:26.682 04:18:28 -- nvmf/common.sh@158 -- # true 00:22:26.682 04:18:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:26.682 04:18:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:26.682 
04:18:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.682 04:18:28 -- nvmf/common.sh@161 -- # true 00:22:26.682 04:18:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.682 04:18:28 -- nvmf/common.sh@162 -- # true 00:22:26.682 04:18:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.682 04:18:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.682 04:18:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.682 04:18:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.682 04:18:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.682 04:18:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.682 04:18:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.682 04:18:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:26.682 04:18:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:26.682 04:18:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:26.682 04:18:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:26.682 04:18:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:26.682 04:18:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:26.682 04:18:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.682 04:18:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.682 04:18:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.682 04:18:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:26.682 04:18:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:26.682 04:18:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.942 04:18:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.942 04:18:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.942 04:18:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.942 04:18:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.942 04:18:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:26.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:22:26.942 00:22:26.942 --- 10.0.0.2 ping statistics --- 00:22:26.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.942 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:26.942 04:18:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:26.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:26.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:22:26.942 00:22:26.942 --- 10.0.0.3 ping statistics --- 00:22:26.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.942 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:26.942 04:18:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:22:26.942 00:22:26.942 --- 10.0.0.1 ping statistics --- 00:22:26.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.942 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:26.942 04:18:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.942 04:18:28 -- nvmf/common.sh@421 -- # return 0 00:22:26.942 04:18:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:26.942 04:18:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.942 04:18:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:26.942 04:18:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:26.942 04:18:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.942 04:18:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:26.942 04:18:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:26.942 04:18:28 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:26.942 04:18:28 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:26.942 04:18:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:26.942 04:18:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:26.942 04:18:28 -- common/autotest_common.sh@10 -- # set +x 00:22:26.942 ************************************ 00:22:26.942 START TEST nvmf_digest_clean 00:22:26.942 ************************************ 00:22:26.942 04:18:28 -- common/autotest_common.sh@1114 -- # run_digest 00:22:26.942 04:18:28 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:26.942 04:18:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:26.942 04:18:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:26.942 04:18:28 -- common/autotest_common.sh@10 -- # set +x 00:22:26.942 04:18:28 -- nvmf/common.sh@469 -- # nvmfpid=97377 00:22:26.942 04:18:28 -- nvmf/common.sh@470 -- # waitforlisten 97377 00:22:26.942 04:18:28 -- common/autotest_common.sh@829 -- # '[' -z 97377 ']' 00:22:26.942 04:18:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.942 04:18:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.942 04:18:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:26.942 04:18:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.942 04:18:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.942 04:18:28 -- common/autotest_common.sh@10 -- # set +x 00:22:26.942 [2024-11-26 04:18:28.598755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:26.942 [2024-11-26 04:18:28.598841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.201 [2024-11-26 04:18:28.742754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.201 [2024-11-26 04:18:28.827282] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:27.201 [2024-11-26 04:18:28.827467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.201 [2024-11-26 04:18:28.827497] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.201 [2024-11-26 04:18:28.827509] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.201 [2024-11-26 04:18:28.827543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.136 04:18:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.136 04:18:29 -- common/autotest_common.sh@862 -- # return 0 00:22:28.136 04:18:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:28.136 04:18:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.136 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.136 04:18:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.136 04:18:29 -- host/digest.sh@120 -- # common_target_config 00:22:28.136 04:18:29 -- host/digest.sh@43 -- # rpc_cmd 00:22:28.136 04:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.136 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.136 null0 00:22:28.136 [2024-11-26 04:18:29.763320] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.136 [2024-11-26 04:18:29.787470] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.136 04:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.136 04:18:29 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:28.136 04:18:29 -- host/digest.sh@77 -- # local rw bs qd 00:22:28.136 04:18:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:28.136 04:18:29 -- host/digest.sh@80 -- # rw=randread 00:22:28.136 04:18:29 -- host/digest.sh@80 -- # bs=4096 00:22:28.136 04:18:29 -- host/digest.sh@80 -- # qd=128 00:22:28.136 04:18:29 -- host/digest.sh@82 -- # bperfpid=97433 00:22:28.136 04:18:29 -- host/digest.sh@83 -- # waitforlisten 97433 /var/tmp/bperf.sock 00:22:28.136 04:18:29 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:28.136 04:18:29 -- common/autotest_common.sh@829 -- # '[' -z 97433 ']' 00:22:28.136 04:18:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:28.136 04:18:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:28.136 04:18:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:22:28.136 04:18:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.136 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:22:28.136 [2024-11-26 04:18:29.834675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:28.136 [2024-11-26 04:18:29.834786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97433 ] 00:22:28.395 [2024-11-26 04:18:29.969537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.395 [2024-11-26 04:18:30.055777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.395 04:18:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.395 04:18:30 -- common/autotest_common.sh@862 -- # return 0 00:22:28.395 04:18:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:28.395 04:18:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:28.395 04:18:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:28.654 04:18:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.654 04:18:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.912 nvme0n1 00:22:28.912 04:18:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:28.912 04:18:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:29.171 Running I/O for 2 seconds... 
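Each run_bperf iteration then drives that bdevperf instance purely over its private RPC socket. The calls are the ones visible in the trace above: finish initialization, attach an NVMe-oF controller over TCP with --ddgst so data digests are generated and checked on the connection, then trigger the 2-second workload:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdevperf itself was started with: -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # the controller shows up as bdev nvme0n1; perform_tests starts the timed run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests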
00:22:31.073 00:22:31.073 Latency(us) 00:22:31.073 [2024-11-26T04:18:32.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.073 [2024-11-26T04:18:32.841Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:31.073 nvme0n1 : 2.00 24311.62 94.97 0.00 0.00 5260.75 2189.50 10664.49 00:22:31.073 [2024-11-26T04:18:32.841Z] =================================================================================================================== 00:22:31.073 [2024-11-26T04:18:32.841Z] Total : 24311.62 94.97 0.00 0.00 5260.75 2189.50 10664.49 00:22:31.073 0 00:22:31.073 04:18:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:31.073 04:18:32 -- host/digest.sh@92 -- # get_accel_stats 00:22:31.073 04:18:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:31.073 | select(.opcode=="crc32c") 00:22:31.073 | "\(.module_name) \(.executed)"' 00:22:31.073 04:18:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:31.073 04:18:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:31.332 04:18:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:31.332 04:18:33 -- host/digest.sh@93 -- # exp_module=software 00:22:31.332 04:18:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:31.332 04:18:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:31.332 04:18:33 -- host/digest.sh@97 -- # killprocess 97433 00:22:31.332 04:18:33 -- common/autotest_common.sh@936 -- # '[' -z 97433 ']' 00:22:31.332 04:18:33 -- common/autotest_common.sh@940 -- # kill -0 97433 00:22:31.332 04:18:33 -- common/autotest_common.sh@941 -- # uname 00:22:31.332 04:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.332 04:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97433 00:22:31.332 04:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:31.332 04:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:31.332 04:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97433' 00:22:31.332 killing process with pid 97433 00:22:31.332 Received shutdown signal, test time was about 2.000000 seconds 00:22:31.332 00:22:31.332 Latency(us) 00:22:31.332 [2024-11-26T04:18:33.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.332 [2024-11-26T04:18:33.100Z] =================================================================================================================== 00:22:31.332 [2024-11-26T04:18:33.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.332 04:18:33 -- common/autotest_common.sh@955 -- # kill 97433 00:22:31.332 04:18:33 -- common/autotest_common.sh@960 -- # wait 97433 00:22:31.591 04:18:33 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:31.591 04:18:33 -- host/digest.sh@77 -- # local rw bs qd 00:22:31.591 04:18:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:31.591 04:18:33 -- host/digest.sh@80 -- # rw=randread 00:22:31.591 04:18:33 -- host/digest.sh@80 -- # bs=131072 00:22:31.591 04:18:33 -- host/digest.sh@80 -- # qd=16 00:22:31.591 04:18:33 -- host/digest.sh@82 -- # bperfpid=97504 00:22:31.591 04:18:33 -- host/digest.sh@83 -- # waitforlisten 97504 /var/tmp/bperf.sock 00:22:31.591 04:18:33 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:31.591 04:18:33 -- 
common/autotest_common.sh@829 -- # '[' -z 97504 ']' 00:22:31.591 04:18:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:31.591 04:18:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:31.591 04:18:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:31.591 04:18:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.591 04:18:33 -- common/autotest_common.sh@10 -- # set +x 00:22:31.591 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:31.591 Zero copy mechanism will not be used. 00:22:31.591 [2024-11-26 04:18:33.292988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:31.591 [2024-11-26 04:18:33.293093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97504 ] 00:22:31.849 [2024-11-26 04:18:33.430800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.849 [2024-11-26 04:18:33.493445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.785 04:18:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.785 04:18:34 -- common/autotest_common.sh@862 -- # return 0 00:22:32.785 04:18:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:32.785 04:18:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:32.785 04:18:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:32.785 04:18:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.785 04:18:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.352 nvme0n1 00:22:33.352 04:18:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:33.352 04:18:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.352 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.352 Zero copy mechanism will not be used. 00:22:33.352 Running I/O for 2 seconds... 
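After every workload the harness asks the bdevperf app which accel module actually executed the crc32c operations that back the digests; with no accel hardware configured the expected module is software and the executed count has to be greater than zero. Roughly, the check in the trace boils down to:

  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")
  (( acc_executed > 0 )) && [[ $acc_module == software ]]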
00:22:35.255 00:22:35.255 Latency(us) 00:22:35.255 [2024-11-26T04:18:37.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.255 [2024-11-26T04:18:37.023Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:35.255 nvme0n1 : 2.00 9143.38 1142.92 0.00 0.00 1747.26 614.40 5868.45 00:22:35.255 [2024-11-26T04:18:37.023Z] =================================================================================================================== 00:22:35.255 [2024-11-26T04:18:37.023Z] Total : 9143.38 1142.92 0.00 0.00 1747.26 614.40 5868.45 00:22:35.255 0 00:22:35.255 04:18:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:35.255 04:18:36 -- host/digest.sh@92 -- # get_accel_stats 00:22:35.255 04:18:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:35.255 04:18:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:35.255 04:18:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:35.255 | select(.opcode=="crc32c") 00:22:35.255 | "\(.module_name) \(.executed)"' 00:22:35.514 04:18:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:35.514 04:18:37 -- host/digest.sh@93 -- # exp_module=software 00:22:35.514 04:18:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:35.514 04:18:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:35.514 04:18:37 -- host/digest.sh@97 -- # killprocess 97504 00:22:35.514 04:18:37 -- common/autotest_common.sh@936 -- # '[' -z 97504 ']' 00:22:35.514 04:18:37 -- common/autotest_common.sh@940 -- # kill -0 97504 00:22:35.514 04:18:37 -- common/autotest_common.sh@941 -- # uname 00:22:35.514 04:18:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.514 04:18:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97504 00:22:35.514 04:18:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:35.514 04:18:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:35.514 killing process with pid 97504 00:22:35.514 04:18:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97504' 00:22:35.514 04:18:37 -- common/autotest_common.sh@955 -- # kill 97504 00:22:35.514 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.514 00:22:35.514 Latency(us) 00:22:35.514 [2024-11-26T04:18:37.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.514 [2024-11-26T04:18:37.282Z] =================================================================================================================== 00:22:35.514 [2024-11-26T04:18:37.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.514 04:18:37 -- common/autotest_common.sh@960 -- # wait 97504 00:22:35.774 04:18:37 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:35.774 04:18:37 -- host/digest.sh@77 -- # local rw bs qd 00:22:35.774 04:18:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:35.774 04:18:37 -- host/digest.sh@80 -- # rw=randwrite 00:22:35.774 04:18:37 -- host/digest.sh@80 -- # bs=4096 00:22:35.774 04:18:37 -- host/digest.sh@80 -- # qd=128 00:22:35.774 04:18:37 -- host/digest.sh@82 -- # bperfpid=97594 00:22:35.774 04:18:37 -- host/digest.sh@83 -- # waitforlisten 97594 /var/tmp/bperf.sock 00:22:35.774 04:18:37 -- common/autotest_common.sh@829 -- # '[' -z 97594 ']' 00:22:35.774 04:18:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 
-t 2 -q 128 -z --wait-for-rpc 00:22:35.774 04:18:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:35.774 04:18:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:35.774 04:18:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:35.774 04:18:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.774 04:18:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.774 [2024-11-26 04:18:37.510206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:35.774 [2024-11-26 04:18:37.510334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97594 ] 00:22:36.033 [2024-11-26 04:18:37.642609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.033 [2024-11-26 04:18:37.700829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.969 04:18:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.969 04:18:38 -- common/autotest_common.sh@862 -- # return 0 00:22:36.969 04:18:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:36.969 04:18:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:36.969 04:18:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:36.969 04:18:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.969 04:18:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.227 nvme0n1 00:22:37.227 04:18:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:37.227 04:18:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.486 Running I/O for 2 seconds... 
00:22:39.460 00:22:39.460 Latency(us) 00:22:39.460 [2024-11-26T04:18:41.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.460 [2024-11-26T04:18:41.228Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:39.460 nvme0n1 : 2.00 28661.06 111.96 0.00 0.00 4461.71 1861.82 9532.51 00:22:39.460 [2024-11-26T04:18:41.228Z] =================================================================================================================== 00:22:39.460 [2024-11-26T04:18:41.228Z] Total : 28661.06 111.96 0.00 0.00 4461.71 1861.82 9532.51 00:22:39.460 0 00:22:39.460 04:18:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:39.460 04:18:41 -- host/digest.sh@92 -- # get_accel_stats 00:22:39.460 04:18:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:39.460 04:18:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:39.460 | select(.opcode=="crc32c") 00:22:39.460 | "\(.module_name) \(.executed)"' 00:22:39.460 04:18:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:39.719 04:18:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:39.719 04:18:41 -- host/digest.sh@93 -- # exp_module=software 00:22:39.719 04:18:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:39.719 04:18:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:39.719 04:18:41 -- host/digest.sh@97 -- # killprocess 97594 00:22:39.719 04:18:41 -- common/autotest_common.sh@936 -- # '[' -z 97594 ']' 00:22:39.719 04:18:41 -- common/autotest_common.sh@940 -- # kill -0 97594 00:22:39.719 04:18:41 -- common/autotest_common.sh@941 -- # uname 00:22:39.719 04:18:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.719 04:18:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97594 00:22:39.719 04:18:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:39.719 04:18:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:39.719 killing process with pid 97594 00:22:39.719 04:18:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97594' 00:22:39.719 04:18:41 -- common/autotest_common.sh@955 -- # kill 97594 00:22:39.719 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.719 00:22:39.719 Latency(us) 00:22:39.719 [2024-11-26T04:18:41.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.719 [2024-11-26T04:18:41.487Z] =================================================================================================================== 00:22:39.719 [2024-11-26T04:18:41.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.719 04:18:41 -- common/autotest_common.sh@960 -- # wait 97594 00:22:39.978 04:18:41 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:39.978 04:18:41 -- host/digest.sh@77 -- # local rw bs qd 00:22:39.978 04:18:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:39.978 04:18:41 -- host/digest.sh@80 -- # rw=randwrite 00:22:39.978 04:18:41 -- host/digest.sh@80 -- # bs=131072 00:22:39.978 04:18:41 -- host/digest.sh@80 -- # qd=16 00:22:39.978 04:18:41 -- host/digest.sh@82 -- # bperfpid=97680 00:22:39.978 04:18:41 -- host/digest.sh@83 -- # waitforlisten 97680 /var/tmp/bperf.sock 00:22:39.978 04:18:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:39.978 04:18:41 -- 
common/autotest_common.sh@829 -- # '[' -z 97680 ']' 00:22:39.978 04:18:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.978 04:18:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.978 04:18:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.978 04:18:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.978 04:18:41 -- common/autotest_common.sh@10 -- # set +x 00:22:39.978 [2024-11-26 04:18:41.614063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:39.978 [2024-11-26 04:18:41.614190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97680 ] 00:22:39.978 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:39.978 Zero copy mechanism will not be used. 00:22:40.237 [2024-11-26 04:18:41.746395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.237 [2024-11-26 04:18:41.804602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.805 04:18:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.805 04:18:42 -- common/autotest_common.sh@862 -- # return 0 00:22:40.805 04:18:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:40.805 04:18:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:40.805 04:18:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:41.064 04:18:42 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.064 04:18:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.323 nvme0n1 00:22:41.323 04:18:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:41.323 04:18:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.582 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:41.582 Zero copy mechanism will not be used. 00:22:41.582 Running I/O for 2 seconds... 
00:22:43.486 00:22:43.486 Latency(us) 00:22:43.486 [2024-11-26T04:18:45.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.486 [2024-11-26T04:18:45.254Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:43.486 nvme0n1 : 2.00 8005.70 1000.71 0.00 0.00 1994.45 1660.74 11081.54 00:22:43.486 [2024-11-26T04:18:45.254Z] =================================================================================================================== 00:22:43.486 [2024-11-26T04:18:45.254Z] Total : 8005.70 1000.71 0.00 0.00 1994.45 1660.74 11081.54 00:22:43.486 0 00:22:43.486 04:18:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:43.486 04:18:45 -- host/digest.sh@92 -- # get_accel_stats 00:22:43.486 04:18:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:43.486 04:18:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:43.486 | select(.opcode=="crc32c") 00:22:43.486 | "\(.module_name) \(.executed)"' 00:22:43.486 04:18:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:43.745 04:18:45 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:43.745 04:18:45 -- host/digest.sh@93 -- # exp_module=software 00:22:43.745 04:18:45 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:43.745 04:18:45 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:43.745 04:18:45 -- host/digest.sh@97 -- # killprocess 97680 00:22:43.745 04:18:45 -- common/autotest_common.sh@936 -- # '[' -z 97680 ']' 00:22:43.745 04:18:45 -- common/autotest_common.sh@940 -- # kill -0 97680 00:22:43.745 04:18:45 -- common/autotest_common.sh@941 -- # uname 00:22:43.745 04:18:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.745 04:18:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97680 00:22:43.745 04:18:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:43.745 04:18:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:43.745 killing process with pid 97680 00:22:43.745 04:18:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97680' 00:22:43.745 Received shutdown signal, test time was about 2.000000 seconds 00:22:43.745 00:22:43.745 Latency(us) 00:22:43.745 [2024-11-26T04:18:45.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.745 [2024-11-26T04:18:45.513Z] =================================================================================================================== 00:22:43.745 [2024-11-26T04:18:45.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.745 04:18:45 -- common/autotest_common.sh@955 -- # kill 97680 00:22:43.745 04:18:45 -- common/autotest_common.sh@960 -- # wait 97680 00:22:44.004 04:18:45 -- host/digest.sh@126 -- # killprocess 97377 00:22:44.004 04:18:45 -- common/autotest_common.sh@936 -- # '[' -z 97377 ']' 00:22:44.004 04:18:45 -- common/autotest_common.sh@940 -- # kill -0 97377 00:22:44.004 04:18:45 -- common/autotest_common.sh@941 -- # uname 00:22:44.004 04:18:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:44.004 04:18:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97377 00:22:44.005 04:18:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:44.005 04:18:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:44.005 killing process with pid 97377 00:22:44.005 04:18:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97377' 
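When a run finishes, killprocess tears down the bdevperf instance (and, at the end of the test, the nvmf target) using the pattern traced here: confirm the pid is still alive, note which reactor process it is, then SIGTERM and reap it. A rough, simplified equivalent of that helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                          # process must still exist
      local name; name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 / reactor_1
      echo "killing process with pid $pid ($name)"
      kill "$pid" && wait "$pid"
  }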
00:22:44.005 04:18:45 -- common/autotest_common.sh@955 -- # kill 97377 00:22:44.005 04:18:45 -- common/autotest_common.sh@960 -- # wait 97377 00:22:44.264 00:22:44.264 real 0m17.434s 00:22:44.264 user 0m31.345s 00:22:44.264 sys 0m5.426s 00:22:44.264 04:18:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:44.264 04:18:45 -- common/autotest_common.sh@10 -- # set +x 00:22:44.264 ************************************ 00:22:44.264 END TEST nvmf_digest_clean 00:22:44.264 ************************************ 00:22:44.264 04:18:46 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:44.264 04:18:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:44.264 04:18:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:44.264 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.264 ************************************ 00:22:44.264 START TEST nvmf_digest_error 00:22:44.264 ************************************ 00:22:44.264 04:18:46 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:44.264 04:18:46 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:44.264 04:18:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:44.264 04:18:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.264 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.523 04:18:46 -- nvmf/common.sh@469 -- # nvmfpid=97799 00:22:44.523 04:18:46 -- nvmf/common.sh@470 -- # waitforlisten 97799 00:22:44.523 04:18:46 -- common/autotest_common.sh@829 -- # '[' -z 97799 ']' 00:22:44.523 04:18:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:44.523 04:18:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.523 04:18:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.523 04:18:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.523 04:18:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.523 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.523 [2024-11-26 04:18:46.073665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:44.523 [2024-11-26 04:18:46.073768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.523 [2024-11-26 04:18:46.197931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.523 [2024-11-26 04:18:46.270920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.523 [2024-11-26 04:18:46.271055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.523 [2024-11-26 04:18:46.271067] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.523 [2024-11-26 04:18:46.271074] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.523 [2024-11-26 04:18:46.271098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.783 04:18:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.783 04:18:46 -- common/autotest_common.sh@862 -- # return 0 00:22:44.783 04:18:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:44.783 04:18:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.783 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.783 04:18:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.783 04:18:46 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:44.783 04:18:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.783 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.783 [2024-11-26 04:18:46.371517] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:44.783 04:18:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.783 04:18:46 -- host/digest.sh@104 -- # common_target_config 00:22:44.783 04:18:46 -- host/digest.sh@43 -- # rpc_cmd 00:22:44.783 04:18:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.783 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.783 null0 00:22:44.783 [2024-11-26 04:18:46.502150] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.783 [2024-11-26 04:18:46.526304] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.783 04:18:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.783 04:18:46 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:44.783 04:18:46 -- host/digest.sh@54 -- # local rw bs qd 00:22:44.783 04:18:46 -- host/digest.sh@56 -- # rw=randread 00:22:44.783 04:18:46 -- host/digest.sh@56 -- # bs=4096 00:22:44.783 04:18:46 -- host/digest.sh@56 -- # qd=128 00:22:44.783 04:18:46 -- host/digest.sh@58 -- # bperfpid=97824 00:22:44.783 04:18:46 -- host/digest.sh@60 -- # waitforlisten 97824 /var/tmp/bperf.sock 00:22:44.783 04:18:46 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:44.783 04:18:46 -- common/autotest_common.sh@829 -- # '[' -z 97824 ']' 00:22:44.783 04:18:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:44.783 04:18:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:44.783 04:18:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:44.783 04:18:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.783 04:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:45.042 [2024-11-26 04:18:46.584121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:45.042 [2024-11-26 04:18:46.584213] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97824 ] 00:22:45.042 [2024-11-26 04:18:46.723154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.042 [2024-11-26 04:18:46.779313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.977 04:18:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.977 04:18:47 -- common/autotest_common.sh@862 -- # return 0 00:22:45.977 04:18:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.977 04:18:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:46.236 04:18:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:46.237 04:18:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.237 04:18:47 -- common/autotest_common.sh@10 -- # set +x 00:22:46.237 04:18:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.237 04:18:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.237 04:18:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.495 nvme0n1 00:22:46.495 04:18:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:46.495 04:18:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.495 04:18:48 -- common/autotest_common.sh@10 -- # set +x 00:22:46.495 04:18:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.495 04:18:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:46.495 04:18:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.495 Running I/O for 2 seconds... 
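The nvmf_digest_error variant differs from the clean runs only on the target side. Before the target finishes init, crc32c is assigned to the accel error module; injection is left disabled while the bperf controller attaches (so the connect itself succeeds), and then 256 corrupted crc32c results are injected right before perform_tests. Every read completion that follows carries a bad data digest, which is what produces the run of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR entries below. The RPC sequence as it appears in the trace (the framework_start_init placement is inferred from the --wait-for-rpc start; the other calls are verbatim):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o crc32c -m error        # route crc32c through the error module
  $rpc framework_start_init
  # attach phase: no corruption, controller create succeeds
  $rpc accel_error_inject_error -o crc32c -t disable
  # ... bperf attaches nvme0 with --ddgst exactly as in the clean test ...
  # arm the injection: the next 256 crc32c operations return corrupted digests
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256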
00:22:46.755 [2024-11-26 04:18:48.275704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.275778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.275792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.286944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.286982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.287009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.296891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.296931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.296957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.308580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.308618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.308645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.321026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.321064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.321091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.333008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.333045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.333072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.341562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.341600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.341626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.352966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.353003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.353029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.364075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.364111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.364137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.376183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.376221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.376248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.388102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.388139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.388166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.400779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.400817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.400844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.413339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.413393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.413420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.425873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.425927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.425953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.435132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.435169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.435195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.445664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.445701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.445751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.454601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.454638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.454665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.465044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.465080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.465107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.477769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.477821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.477848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.490072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.490126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.490153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.499652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.499690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 
04:18:48.499716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.755 [2024-11-26 04:18:48.508541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:46.755 [2024-11-26 04:18:48.508578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.755 [2024-11-26 04:18:48.508605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.519517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.519555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.519581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.531750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.531787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.531814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.543872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.543908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.543935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.552793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.552830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.552856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.561730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.561781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.561808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.573939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.574014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19248 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.574027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.585908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.585960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.585986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.597885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.597939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.597966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.606468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.606504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.606530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.618270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.618307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.618333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.630621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.630659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.630685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.643167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.643204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.643229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.654193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.654245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:4351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.654272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.667055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.014 [2024-11-26 04:18:48.667093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.014 [2024-11-26 04:18:48.667119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.014 [2024-11-26 04:18:48.677576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.677614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.677640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.686890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.686928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.686954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.696260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.696298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.696324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.708310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.708347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.708373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.717728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.717779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.717806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.729329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.729367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.729394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.739639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.739677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.739704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.750784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.750820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.750847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.762422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.762461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.762487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.015 [2024-11-26 04:18:48.772441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.015 [2024-11-26 04:18:48.772478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.015 [2024-11-26 04:18:48.772505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.785530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.785568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.785595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.797335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.797373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.797400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.808654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 
[2024-11-26 04:18:48.808691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.808718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.819579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.819618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.819645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.828126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.828164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.828190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.839594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.839632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.839658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.851283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.851320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.851346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.864002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.864040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.864067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.875977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.876014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.876040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.887219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.887255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.887282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.896289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.896326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.896352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.908063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.908115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.908142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.920797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.920835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.920861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.932582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.932620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.932646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.944799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.944836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.944862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.953261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.953298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.953323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.965320] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.965358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.965385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.977837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.977891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.977918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:48.990671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:48.990763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:48.990777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.272 [2024-11-26 04:18:49.001796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.272 [2024-11-26 04:18:49.001831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.272 [2024-11-26 04:18:49.001858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.273 [2024-11-26 04:18:49.014457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.273 [2024-11-26 04:18:49.014495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.273 [2024-11-26 04:18:49.014522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.273 [2024-11-26 04:18:49.027318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.273 [2024-11-26 04:18:49.027357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.273 [2024-11-26 04:18:49.027383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.039267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.039305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.039331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
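The messages above and below repeat one pattern per affected I/O: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on the queue pair, nvme_io_qpair_print_command echoes the READ that was in flight, and spdk_nvme_print_completion shows that command being completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). NVMe/TCP protects the DATA field of a PDU with a CRC32C data digest (DDGST), so the first message is the initiator finding that the digest it computed over the received payload does not match the digest carried in the PDU. The sketch below is only a minimal illustration of that checksum and check, not SPDK's accelerated implementation; the payload contents and received_ddgst value are invented for the example.

/* Minimal CRC32C (Castagnoli, reflected polynomial 0x82F63B78) sketch.
 * Illustrates the checksum NVMe/TCP uses for the data digest (DDGST).
 * SPDK computes this through accelerated helpers; the bitwise loop here
 * is for illustration only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++) {
            /* LSB-first update with the reflected Castagnoli polynomial. */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical received data PDU payload and the DDGST carried with it
     * (deliberately wrong so the check fails, as in the log above). */
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));
    uint32_t received_ddgst = 0;

    if (crc32c(payload, sizeof(payload)) != received_ddgst) {
        /* This mismatch is the condition the log reports as a
         * "data digest error"; the command is then failed back to the
         * upper layer with a transient transport error. */
        printf("data digest error\n");
    }
    return 0;
}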
00:22:47.542 [2024-11-26 04:18:49.048464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.048501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.048528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.058689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.058737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.058765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.070237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.070303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.070329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.080839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.080876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.080902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.089851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.089904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.089931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.098857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.098893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.098919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.109328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.109381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.109394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.122371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.122426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.122439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.132233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.132289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.132300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.144138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.144176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.144203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.156948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.156986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.157013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.168366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.168403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.168431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.180416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.180454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.180481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.193056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.193093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.542 [2024-11-26 04:18:49.193104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.542 [2024-11-26 04:18:49.201086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.542 [2024-11-26 04:18:49.201124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.201150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.213272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.213310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.213336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.225231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.225286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.225312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.238595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.238650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.238677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.251836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.251890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.251917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.264806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.264860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.264888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.277190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.277243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:47.543 [2024-11-26 04:18:49.277271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.285547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.285597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.285624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.543 [2024-11-26 04:18:49.298072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.543 [2024-11-26 04:18:49.298125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.543 [2024-11-26 04:18:49.298153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.310560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.310613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.310639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.322308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.322377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.322404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.335278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.335332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.335360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.345759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.345812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.345839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.356274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.356328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19839 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.356355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.366211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.366266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.366293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.376096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.376149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.376176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.386687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.802 [2024-11-26 04:18:49.386750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.802 [2024-11-26 04:18:49.386776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.802 [2024-11-26 04:18:49.396094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.396146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.396172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.405561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.405599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.405625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.415111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.415149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.415175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.424618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.424656] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.424682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.436420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.436458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.436484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.446148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.446200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.446227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.456593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.456630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.456657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.467403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.467440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.467466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.475478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.475511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.475537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.486793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.486825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.486851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.498320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.498353] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.498379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.510752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.510784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.510810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.519255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.519288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.519314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.531184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.531221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.531246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.543743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.543776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.543801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.803 [2024-11-26 04:18:49.555511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:47.803 [2024-11-26 04:18:49.555544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.803 [2024-11-26 04:18:49.555571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.062 [2024-11-26 04:18:49.568217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.062 [2024-11-26 04:18:49.568262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.062 [2024-11-26 04:18:49.568288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.062 [2024-11-26 04:18:49.578638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 
00:22:48.062 [2024-11-26 04:18:49.578672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.062 [2024-11-26 04:18:49.578698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.590000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.590048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.590074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.599151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.599185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.599211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.609103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.609135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.609162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.619156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.619194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.619220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.628159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.628192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.628218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.636332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.636364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.636390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.647553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.647587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.647614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.659066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.659099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.659125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.669403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.669437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.669463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.679856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.679888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.679915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.689274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.689307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.689334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.698237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.698302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.698328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.707615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.707649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.707676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.718038] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.718074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.718100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.729937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.729990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.730008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.742669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.742745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.742758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.754910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.754958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.754984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.764086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.764135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.764161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.774501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.774534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.774561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.782399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.782431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.782457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
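Each completion line also encodes the retry policy: the pair printed as (00/22) is the status code type and status code (generic command status / transient transport error), and dnr:0 means the Do Not Retry bit is clear, so the initiator is allowed to resubmit the failed READ. Below is a small, self-contained decoder for that status halfword, written against the standard NVMe completion-entry bit layout; the example value and the decoding code are illustrative only and are not an SPDK helper.

/* Sketch: decoding the fields printed as "(00/22) ... p:0 m:0 dnr:0".
 * Bit layout follows the NVMe completion queue entry (DW3 bits 31:16,
 * with the phase tag in the low bit of the halfword). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical raw phase/status halfword from a completion:
     * SCT=0x0 (generic), SC=0x22 (transient transport error), DNR=0. */
    uint16_t raw = 0x0044;

    unsigned p   = raw & 0x1u;          /* phase tag        */
    unsigned sc  = (raw >> 1) & 0xFFu;  /* status code      */
    unsigned sct = (raw >> 9) & 0x7u;   /* status code type */
    unsigned m   = (raw >> 14) & 0x1u;  /* more             */
    unsigned dnr = (raw >> 15) & 0x1u;  /* do not retry     */

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (dnr == 0) ? " -> command may be retried" : "");
    return 0;
}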
00:22:48.063 [2024-11-26 04:18:49.791512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.063 [2024-11-26 04:18:49.791544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.063 [2024-11-26 04:18:49.791571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.063 [2024-11-26 04:18:49.801756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.064 [2024-11-26 04:18:49.801803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.064 [2024-11-26 04:18:49.801830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.064 [2024-11-26 04:18:49.810889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.064 [2024-11-26 04:18:49.810940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.064 [2024-11-26 04:18:49.810966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.064 [2024-11-26 04:18:49.819780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.064 [2024-11-26 04:18:49.819812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.064 [2024-11-26 04:18:49.819838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.830161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.830208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.830235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.842885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.842923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.842950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.855017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.855051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.855077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.867834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.867867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.867893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.876148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.876181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.876207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.888303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.888355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.888382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.901145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.901178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.901205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.912113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.912146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.912171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.922287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.922319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.922346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.931900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.931936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.931962] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.940526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.940559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.940585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.951376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.951409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.951435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.962790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.323 [2024-11-26 04:18:49.962823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.323 [2024-11-26 04:18:49.962849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.323 [2024-11-26 04:18:49.971245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:49.971279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:49.971305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:49.980892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:49.980924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:49.980950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:49.991097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:49.991130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:49.991156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.004341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.004394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.004421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.016538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.016592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.016619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.031378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.031445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.031471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.041744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.041796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.041823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.053893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.053944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.053971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.064623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.064658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.064685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.324 [2024-11-26 04:18:50.075910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.324 [2024-11-26 04:18:50.075949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.324 [2024-11-26 04:18:50.075975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.583 [2024-11-26 04:18:50.085751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.583 [2024-11-26 04:18:50.085802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:48.583 [2024-11-26 04:18:50.085829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.583 [2024-11-26 04:18:50.097566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.097605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.097631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.108497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.108536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.108562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.118455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.118492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.118518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.128261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.128298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.128324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.138732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.138767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.138794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.149085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.149121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.149147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.160912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.160948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9465 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.160975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.172605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.172642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.172669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.182677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.182736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.182748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.192890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.192923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.192949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.202292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.202328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.202354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.213291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.213324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.213351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.222772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.222805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.222830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.234222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.234283] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.234309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.245639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.245673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.245699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 [2024-11-26 04:18:50.253693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9de8d0) 00:22:48.584 [2024-11-26 04:18:50.253751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.584 [2024-11-26 04:18:50.253784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.584 00:22:48.584 Latency(us) 00:22:48.584 [2024-11-26T04:18:50.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.584 [2024-11-26T04:18:50.352Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:48.584 nvme0n1 : 2.04 22769.79 88.94 0.00 0.00 5517.05 2204.39 49569.05 00:22:48.584 [2024-11-26T04:18:50.352Z] =================================================================================================================== 00:22:48.584 [2024-11-26T04:18:50.352Z] Total : 22769.79 88.94 0.00 0.00 5517.05 2204.39 49569.05 00:22:48.584 0 00:22:48.584 04:18:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:48.584 04:18:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:48.584 04:18:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:48.584 | .driver_specific 00:22:48.584 | .nvme_error 00:22:48.584 | .status_code 00:22:48.584 | .command_transient_transport_error' 00:22:48.584 04:18:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:48.843 04:18:50 -- host/digest.sh@71 -- # (( 182 > 0 )) 00:22:48.843 04:18:50 -- host/digest.sh@73 -- # killprocess 97824 00:22:48.843 04:18:50 -- common/autotest_common.sh@936 -- # '[' -z 97824 ']' 00:22:48.843 04:18:50 -- common/autotest_common.sh@940 -- # kill -0 97824 00:22:48.843 04:18:50 -- common/autotest_common.sh@941 -- # uname 00:22:48.843 04:18:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.843 04:18:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97824 00:22:49.102 04:18:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:49.102 04:18:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:49.102 killing process with pid 97824 00:22:49.102 04:18:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97824' 00:22:49.102 Received shutdown signal, test time was about 2.000000 seconds 00:22:49.102 00:22:49.102 Latency(us) 00:22:49.102 [2024-11-26T04:18:50.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.102 [2024-11-26T04:18:50.870Z] 
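The get_transient_errcount step traced above is where the test turns the injected digest failures into a pass/fail signal: bdev_get_iostat is queried over the bdevperf RPC socket and the per-status-code NVMe error counters (enabled via bdev_nvme_set_options --nvme-error-stat, as the second pass below also does) are filtered with jq. A minimal standalone sketch of the same query, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and the controller bdev is nvme0n1:

    # Count completions that finished with the (00/22) COMMAND TRANSIENT TRANSPORT ERROR
    # status seen in the records above, for bdev nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Any value greater than zero (182 in this run) is taken as proof that the corrupted data digests surfaced as transient transport errors, which is exactly what the (( 182 > 0 )) check that follows asserts.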
=================================================================================================================== 00:22:49.102 [2024-11-26T04:18:50.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.102 04:18:50 -- common/autotest_common.sh@955 -- # kill 97824 00:22:49.102 04:18:50 -- common/autotest_common.sh@960 -- # wait 97824 00:22:49.102 04:18:50 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:49.102 04:18:50 -- host/digest.sh@54 -- # local rw bs qd 00:22:49.102 04:18:50 -- host/digest.sh@56 -- # rw=randread 00:22:49.102 04:18:50 -- host/digest.sh@56 -- # bs=131072 00:22:49.102 04:18:50 -- host/digest.sh@56 -- # qd=16 00:22:49.102 04:18:50 -- host/digest.sh@58 -- # bperfpid=97914 00:22:49.102 04:18:50 -- host/digest.sh@60 -- # waitforlisten 97914 /var/tmp/bperf.sock 00:22:49.102 04:18:50 -- common/autotest_common.sh@829 -- # '[' -z 97914 ']' 00:22:49.102 04:18:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:49.102 04:18:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.102 04:18:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:49.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:49.102 04:18:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.102 04:18:50 -- common/autotest_common.sh@10 -- # set +x 00:22:49.102 04:18:50 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:49.102 [2024-11-26 04:18:50.863250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:49.102 [2024-11-26 04:18:50.863377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97914 ] 00:22:49.102 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:49.102 Zero copy mechanism will not be used. 
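For the second pass the harness relaunches bdevperf idle and drives it over RPC. A rough annotated reading of the command traced above (a sketch; the meaning given for -z is an assumption, inferred from the script starting I/O only later via perform_tests):

    #   -m 2                    core mask 0x2: run the reactor on core 1
    #   -r /var/tmp/bperf.sock  serve JSON-RPC on this UNIX socket
    #   -w randread -o 131072   random reads, 131072-byte (128 KiB) I/Os
    #   -t 2 -q 16              2-second run at queue depth 16
    #   -z                      start idle and wait for perform_tests over RPC (assumed)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &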
00:22:49.361 [2024-11-26 04:18:51.002514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.361 [2024-11-26 04:18:51.058946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.297 04:18:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.298 04:18:51 -- common/autotest_common.sh@862 -- # return 0 00:22:50.298 04:18:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:50.298 04:18:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:50.556 04:18:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:50.556 04:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.556 04:18:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.556 04:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.556 04:18:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.556 04:18:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.815 nvme0n1 00:22:50.815 04:18:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:50.815 04:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.815 04:18:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.815 04:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.815 04:18:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:50.815 04:18:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:50.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:50.815 Zero copy mechanism will not be used. 00:22:50.815 Running I/O for 2 seconds... 
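Taken together, the setup records above amount to: enable per-status-code NVMe error accounting with --bdev-retry-count -1 so injected errors are retried rather than failing the job, make sure no crc32c error injection is still armed, attach the NVMe/TCP controller with data digest enabled (--ddgst), arm crc32c corruption in the accel framework, and only then start the queued workload. A condensed sketch of that sequence; TARGET_SOCK is an assumption (the socket behind rpc_cmd is not shown in this part of the log), everything else is taken from the traced commands:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock      # bdevperf RPC socket from the trace
    TARGET_SOCK=/var/tmp/spdk.sock      # assumed default socket behind rpc_cmd

    # count NVMe errors per completion status code; retry so injected errors are
    # recorded in the counters instead of failing the bdev
    $RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any previously armed crc32c injection before attaching
    $RPC -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t disable
    # attach the NVMe/TCP controller with TCP data digest enabled
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm the crc32c error injection exactly as traced (-t corrupt -i 32)
    $RPC -s "$TARGET_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the queued randread workload in the idle bdevperf instance
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

Each miscompared digest is then reported by the initiator as one of the data digest error / COMMAND TRANSIENT TRANSPORT ERROR pairs that fill the rest of this run.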
00:22:50.815 [2024-11-26 04:18:52.543635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.543686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.543718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.547576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.547611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.547638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.551292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.551329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.551357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.555323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.555360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.555387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.559231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.559269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.559296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.563370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.563408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.563436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.567183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.567220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.567248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.570436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.570471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.570498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.815 [2024-11-26 04:18:52.574516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:50.815 [2024-11-26 04:18:52.574552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.815 [2024-11-26 04:18:52.574578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.075 [2024-11-26 04:18:52.577574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.075 [2024-11-26 04:18:52.577606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.075 [2024-11-26 04:18:52.577633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.075 [2024-11-26 04:18:52.581892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.075 [2024-11-26 04:18:52.581929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.075 [2024-11-26 04:18:52.581956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.075 [2024-11-26 04:18:52.585667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.075 [2024-11-26 04:18:52.585701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.075 [2024-11-26 04:18:52.585742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.075 [2024-11-26 04:18:52.589215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.075 [2024-11-26 04:18:52.589250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.075 [2024-11-26 04:18:52.589277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.075 [2024-11-26 04:18:52.592484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.075 [2024-11-26 04:18:52.592521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.075 [2024-11-26 04:18:52.592547] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.075 [2024-11-26 04:18:52.596162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.596199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.596226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.600154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.600190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.600216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.603514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.603550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.603577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.607161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.607198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.607224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.611097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.611135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.611162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.615169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.615206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.615232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.619061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.619098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 
04:18:52.619125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.621928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.621975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.622025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.625382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.625414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.625441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.629094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.629130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.629156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.632975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.633012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.633039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.636304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.636341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.636368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.639746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.639782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.639808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.643160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.643198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.643225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.646881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.646917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.646943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.650409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.650446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.650472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.654116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.654170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.654198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.657694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.657751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.657779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.661525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.661558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.661585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.665076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.665112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.665138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.668972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.669008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.669034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.672572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.672609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.672635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.676511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.076 [2024-11-26 04:18:52.676548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.076 [2024-11-26 04:18:52.676574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.076 [2024-11-26 04:18:52.679697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.679741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.679768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.683200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.683236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.683263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.686915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.686952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.686978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.690555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.690592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.690618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.694524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.694561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.694587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.698031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.698082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.698095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.702053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.702104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.702116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.706106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.706158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.706170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.710248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.710301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.710327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.714351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.714389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.714415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.717832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.717880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.717892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.721200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 
[2024-11-26 04:18:52.721233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.721259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.724875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.724911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.724937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.728653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.728688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.728716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.732413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.732451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.732477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.735869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.735921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.735948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.739636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.739674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.739701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.743497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.743535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.743562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.747351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.747388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.747415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.750631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.750667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.750693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.753922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.753969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.754019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.757514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.757547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.757574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.761392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.761428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.761454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.765432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.765469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.765495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.768296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.768332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.768359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.771928] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.771980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.772006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.775688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.775734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.077 [2024-11-26 04:18:52.775761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.077 [2024-11-26 04:18:52.779344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.077 [2024-11-26 04:18:52.779395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.779421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.782843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.782894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.782921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.786847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.786907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.786934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.791496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.791550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.791577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.795408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.795487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:51.078 [2024-11-26 04:18:52.799520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.799572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.799598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.803532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.803567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.803595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.807517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.807553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.807580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.811622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.811658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.811686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.816276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.816311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.816339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.819919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.819959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.819987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.824371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.824569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.824603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.828735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.828945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.829134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.078 [2024-11-26 04:18:52.833612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.078 [2024-11-26 04:18:52.833838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.078 [2024-11-26 04:18:52.834075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.838967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.839167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.839304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.843888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.844118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.844270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.847864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.848065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.848198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.852224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.852437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.852557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.856261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.856300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.856328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.860023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.860059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.860087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.864005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.864042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.864070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.867033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.867100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.871061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.871124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.874900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.874934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.874962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.878745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.878789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.878816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.883117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.883151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.883179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.886543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.886579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.886606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.890341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.890377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.890404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.893836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.893870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.893897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.897501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.897535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.897563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.901601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.901634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.901662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.906051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.906087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.906116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.909582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.909615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 
[2024-11-26 04:18:52.909642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.339 [2024-11-26 04:18:52.913469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.339 [2024-11-26 04:18:52.913503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.339 [2024-11-26 04:18:52.913530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.917539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.917574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.917601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.920788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.920821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.920848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.924378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.924413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.924440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.928554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.928589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.928616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.932322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.932503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.932535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.936394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.936431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.936458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.939685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.939779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.939808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.943481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.943518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.943546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.946979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.947014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.947042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.950631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.950667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.950694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.954659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.954698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.954751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.958774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.958817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.958845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.962342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.962377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.962388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.966781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.966814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.966824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.970610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.970647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.970658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.974068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.974115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.974142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.977608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.977762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.977793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.981431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.981465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.981476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.985518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.985687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.985841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.989136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.989172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.989184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.340 [2024-11-26 04:18:52.992543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.340 [2024-11-26 04:18:52.992579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.340 [2024-11-26 04:18:52.992591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:52.996533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:52.996568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:52.996580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.000280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.000314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.000325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.003868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.003903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.003932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.007953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.007987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.008014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.011412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.011447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.011458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.015119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 
00:22:51.341 [2024-11-26 04:18:53.015154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.015166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.019158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.019193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.019204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.023008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.023043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.023054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.026456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.026490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.026501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.029614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.029647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.029675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.033184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.033220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.033231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.037133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.037169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.037181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.040960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.040995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.041007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.043898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.043934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.043961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.047014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.047049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.047061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.050813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.050848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.050859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.054765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.054810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.054822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.057989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.058046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.058073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.061959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.062016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.062044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.066166] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.066365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.341 [2024-11-26 04:18:53.066381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.341 [2024-11-26 04:18:53.069837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.341 [2024-11-26 04:18:53.069877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.069905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.342 [2024-11-26 04:18:53.073458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.073493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.073519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.342 [2024-11-26 04:18:53.077298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.077497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.077514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.342 [2024-11-26 04:18:53.082065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.082104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.082133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.342 [2024-11-26 04:18:53.086781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.086829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.086859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.342 [2024-11-26 04:18:53.090616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.090650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.090661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:51.342 [2024-11-26 04:18:53.094785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.094834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.094862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.342 [2024-11-26 04:18:53.099148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.342 [2024-11-26 04:18:53.099183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.342 [2024-11-26 04:18:53.099194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.102899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.102936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.102964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.107177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.107343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.107359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.110338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.110385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.110427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.114210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.114246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.114258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.117381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.117414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.117426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.121807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.121841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.121868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.125968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.126022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.126049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.130177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.130226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.130239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.134365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.134399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.134410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.137756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.137787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.137814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.141686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.141729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.141758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.145328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.145363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.145373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.148899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.148933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.148944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.152678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.152720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.152732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.155778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.155812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.155823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.159315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.159351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.159363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.602 [2024-11-26 04:18:53.163194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.602 [2024-11-26 04:18:53.163229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.602 [2024-11-26 04:18:53.163241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.166792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.166824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.166835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.170482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.170515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.170526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.174657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.174693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.174703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.178131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.178168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.178195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.181757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.181788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.181815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.185586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.185619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.185631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.189298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.189333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.189345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.192822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.192856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.192867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.196091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.196125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:51.603 [2024-11-26 04:18:53.196136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.199578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.199614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.199625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.203199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.203235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.203262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.206807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.206842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.206869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.210973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.211009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.211043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.214662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.214857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.214888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.219181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.219388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.219406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.223636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.223748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.223762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.228110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.228145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.228172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.231612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.231647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.231672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.235744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.235793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.235820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.239324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.239361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.239387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.243496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.243534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.243561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.247179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.247216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.247243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.603 [2024-11-26 04:18:53.251155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.603 [2024-11-26 04:18:53.251190] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.603 [2024-11-26 04:18:53.251216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.254766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.254800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.254827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.258277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.258343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.258354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.261743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.261789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.261816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.265820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.265868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.265895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.269476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.269509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.269535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.273338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.273374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.273401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.276149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.276184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.276210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.279944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.279981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.280007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.283593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.283630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.283657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.287501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.287538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.287565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.290838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.290888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.290915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.294803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.294838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.294865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.298477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.298514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.298540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.302178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 
00:22:51.604 [2024-11-26 04:18:53.302220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.302233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.306159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.306211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.306238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.309738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.309783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.309809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.313886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.313935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.313962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.318050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.318085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.318113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.322214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.322264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.326329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.326364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.326391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.331061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.331097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.331125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.604 [2024-11-26 04:18:53.334400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.604 [2024-11-26 04:18:53.334436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.604 [2024-11-26 04:18:53.334461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.338249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.338334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.338345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.342402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.342438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.342465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.346405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.346441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.346467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.349977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.350032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.350059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.352998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.353029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.353056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.356252] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.356289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.356316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.605 [2024-11-26 04:18:53.359733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.605 [2024-11-26 04:18:53.359770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.605 [2024-11-26 04:18:53.359796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.364075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.364111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.364138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.368100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.368137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.368163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.371988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.372025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.372052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.374962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.374998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.375024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.378916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.378952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.378978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:51.866 [2024-11-26 04:18:53.382565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.382602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.382630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.385966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.386045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.386073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.389474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.389505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.389531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.393295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.393331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.393357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.396820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.396858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.396884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.400410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.400447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.404431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.404467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.404493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.408011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.408048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.408074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.411639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.411675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.411701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.414624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.414661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.414687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.418084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.418134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.418160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.421425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.421457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.421483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.425249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.425287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.425313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.429087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.429123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.429150] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.432658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.432694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.432720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.436493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.436530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.436557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.440153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.440190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.440216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.443582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.443619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.443645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.447560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.447597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.447623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.451083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.451120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.451147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.454456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.454493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.454519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.457881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.457928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.457955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.461328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.461361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.461388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.465191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.465227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.465253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.469021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.469057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.469083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.472633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.472670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.472697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.476619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.476655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.476682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.480220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.480256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.866 [2024-11-26 04:18:53.480282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.483618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.483653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.483680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.487669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.487705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.487745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.491938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.491976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.492003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.495558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.495595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.495621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.499309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.499346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.499373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.502845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.866 [2024-11-26 04:18:53.502883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.866 [2024-11-26 04:18:53.502909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.866 [2024-11-26 04:18:53.506237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.506306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.506334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.509876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.509925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.509951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.513823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.513870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.513897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.516224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.516256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.516282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.519804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.519839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.519865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.523680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.523739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.523752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.527232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.527266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.527293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.531214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.531249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.531276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.535368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.535403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.535429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.538930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.538967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.538994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.542208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.542261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.542287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.545977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.546033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.546061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.549815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.549863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.549873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.553309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.553342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.553368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.557200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.557237] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.557263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.561047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.561098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.561139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.563991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.564028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.564055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.567512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.567546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.567572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.571362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.571399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.571425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.575154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.575189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.575216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.579230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.579265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.579291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.583002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 
00:22:51.867 [2024-11-26 04:18:53.583037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.583063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.586379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.586416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.586442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.589783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.589830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.589857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.593332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.593364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.593390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.596942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.596980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.597007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.600537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.600573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.600600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.604135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.604170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.604196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.608304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.608341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.608367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.611835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.611872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.611898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.615327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.615365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.615392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.619152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.619189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.619216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.622026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.622073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.622100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.867 [2024-11-26 04:18:53.625805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:51.867 [2024-11-26 04:18:53.625853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.867 [2024-11-26 04:18:53.625879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.629000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.629032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.629059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.633330] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.633365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.633392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.636833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.636881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.636908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.640619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.640654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.640680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.644055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.644091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.644117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.648203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.648240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.648266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.651782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.651818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.655551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.655589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.655615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:52.128 [2024-11-26 04:18:53.658942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.658993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.659020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.662304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.662341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.662368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.666142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.666180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.666207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.669980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.670052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.670080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.674252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.674305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.674347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.678521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.678554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.678581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.682156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.128 [2024-11-26 04:18:53.682189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.128 [2024-11-26 04:18:53.682215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.128 [2024-11-26 04:18:53.686190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.686241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.686252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.689866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.689898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.689924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.693089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.693122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.693149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.697136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.697169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.697195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.700917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.700950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.700977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.705023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.705055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.705082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.709082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.709113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.709139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.713188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.713220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.713246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.716666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.716698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.716734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.719999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.720050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.720062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.723186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.723219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.723246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.726604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.726637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.726663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.730199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.730232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.730259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.733407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.733438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.733465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.736899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.736948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.736975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.740825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.740872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.740899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.745120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.745152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.745179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.748397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.748430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.748456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.752111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.752144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.752171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.755481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.755514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.755540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.758943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.758975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:52.129 [2024-11-26 04:18:53.759001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.762693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.129 [2024-11-26 04:18:53.762734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.129 [2024-11-26 04:18:53.762761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.129 [2024-11-26 04:18:53.766144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.766193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.766220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.769861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.769893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.769920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.773436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.773471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.773497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.776493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.776529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.776556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.779788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.779820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.779847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.783641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.783675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.783702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.787330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.787364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.787390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.791183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.791216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.791243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.794379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.794412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.794438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.797302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.797334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.797360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.801305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.801337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.801363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.804447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.804494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.804521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.809036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.809086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.809124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.812168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.812221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.812248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.816507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.816539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.816566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.820096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.820128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.820155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.824379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.824412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.824438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.828114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.828147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.828173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.831963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.831995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.832022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.835534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 
00:22:52.130 [2024-11-26 04:18:53.835566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.835593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.839495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.130 [2024-11-26 04:18:53.839527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.130 [2024-11-26 04:18:53.839553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.130 [2024-11-26 04:18:53.842825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.842872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.842899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.846774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.846820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.846846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.850366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.850397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.850423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.854215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.854262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.854289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.858389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.858421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.858448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.861961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.861991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.862024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.865656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.865687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.865713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.869469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.869504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.869531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.873166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.873201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.873228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.876577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.876611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.876637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.880300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.880332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.880359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.884872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.884904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.884930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.131 [2024-11-26 04:18:53.888660] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.131 [2024-11-26 04:18:53.888693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.131 [2024-11-26 04:18:53.888718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.892899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.892946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.892972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.896204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.896236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.896263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.899979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.900010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.900036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.903906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.903937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.903964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.907386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.907416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.907442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.911124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.911156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.911182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.914569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.914601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.914627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.918123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.918170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.918197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.921788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.921818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.921844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.925276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.925308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.925333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.928707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.928763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.928790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.932635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.932666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.932693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.936610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.936643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.936669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.941067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.941130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.941142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.944355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.944387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.944414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.947997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.948030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.948056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.951841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.951872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.951898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.955375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.955407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.392 [2024-11-26 04:18:53.955432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.392 [2024-11-26 04:18:53.959574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.392 [2024-11-26 04:18:53.959626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.959652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.963749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.963799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.963827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.968602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.968656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.968684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.972963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.973018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.973045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.977189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.977226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.977252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.981278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.981315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.981341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.985191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.985228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.985255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.989019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.989086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.989098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.992666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.992702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.393 [2024-11-26 04:18:53.992755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.996370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:53.996408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:53.996434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:53.999974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.000010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.000037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.003370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.003407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.003434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.006952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.007049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.007077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.010886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.010939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.010967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.014726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.014777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.014804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.018112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.018166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.018193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.021956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.021989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.022038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.025678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.025738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.025767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.029881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.029917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.029943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.033567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.033600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.033626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.037110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.037143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.037170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.040556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.040592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.040618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.044286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.044321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.044347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.048196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.048233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.048259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.051366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.051428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.054854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.054906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.054933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.058606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.058644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.393 [2024-11-26 04:18:54.058671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.393 [2024-11-26 04:18:54.062034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.393 [2024-11-26 04:18:54.062084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.062112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.065601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.065665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.065707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.069685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 
[2024-11-26 04:18:54.069744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.069772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.073130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.073165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.073191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.076159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.076196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.076222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.079764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.079796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.079822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.084142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.084193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.084220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.088013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.088066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.088108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.092146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.092200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.092227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.096190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.096255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.096282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.100600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.100654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.100681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.104516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.104552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.104579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.108332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.108370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.108396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.112053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.112120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.112146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.116339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.116376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.116403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.120100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.120165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.120191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.124505] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.124556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.124584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.128794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.128846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.128873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.133266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.133303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.133330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.136973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.137026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.137053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.141249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.141286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.141313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.144095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.144133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.144159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.394 [2024-11-26 04:18:54.147813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.147848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.147874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:52.394 [2024-11-26 04:18:54.152694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.394 [2024-11-26 04:18:54.152762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.394 [2024-11-26 04:18:54.152792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.655 [2024-11-26 04:18:54.156608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.655 [2024-11-26 04:18:54.156660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.655 [2024-11-26 04:18:54.156688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.655 [2024-11-26 04:18:54.160076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.655 [2024-11-26 04:18:54.160113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.655 [2024-11-26 04:18:54.160147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.655 [2024-11-26 04:18:54.163630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.655 [2024-11-26 04:18:54.163667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.655 [2024-11-26 04:18:54.163693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.655 [2024-11-26 04:18:54.167473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.655 [2024-11-26 04:18:54.167509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.655 [2024-11-26 04:18:54.167535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.655 [2024-11-26 04:18:54.171222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.171260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.171287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.175140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.175177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.175203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.178716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.178765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.178792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.182120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.182175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.182202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.185484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.185517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.185543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.188761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.188796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.188823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.192353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.192391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.192417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.195953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.195989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.196016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.200011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.200048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.200075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.204001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.204053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.204080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.206877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.206914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.206940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.210937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.210973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.210999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.214549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.214585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.214611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.218382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.218420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.218446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.221550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.221583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.221609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.225231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.225267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 
[2024-11-26 04:18:54.225293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.229157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.229193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.229220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.232792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.232828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.232854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.236834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.236870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.236897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.240464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.240500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.240527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.244245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.244281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.244307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.248252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.248288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.248314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.252364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.252400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.252426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.256509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.256544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.256570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.260068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.260103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.260130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.263830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.263879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.263905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.656 [2024-11-26 04:18:54.267888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.656 [2024-11-26 04:18:54.267923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.656 [2024-11-26 04:18:54.267949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.272106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.272142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.272169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.275787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.275822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.275848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.278497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.278532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.278558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.282462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.282498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.282524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.286306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.286357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.286384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.289639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.289672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.289697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.292998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.293048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.293074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.296526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.296576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.296602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.300564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.300614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.300641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.304252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.304287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.304313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.307862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.307898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.307924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.311865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.311903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.311930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.315328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.315366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.315392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.318933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.318970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.318996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.322698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.322744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.322771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.325881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.325914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.325940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.329448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 
[2024-11-26 04:18:54.329481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.329507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.333022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.333073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.333099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.336554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.336590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.336617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.340114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.340150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.340176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.343970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.344022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.344048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.347654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.347691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.347717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.351127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.351165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.351192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.354891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.354928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.354955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.358358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.358396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.358423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.362303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.362355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.657 [2024-11-26 04:18:54.362381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.657 [2024-11-26 04:18:54.366047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.657 [2024-11-26 04:18:54.366096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.366123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.370099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.370136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.370163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.374228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.374279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.374306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.377627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.377658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.377684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.381355] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.381391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.381417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.385474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.385510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.385536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.389162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.389196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.389222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.392034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.392071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.392098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.395922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.395958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.395984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.400033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.400069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.400096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.404449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.404485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.404511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:52.658 [2024-11-26 04:18:54.408089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.408126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.408153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.658 [2024-11-26 04:18:54.411578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.658 [2024-11-26 04:18:54.411613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.658 [2024-11-26 04:18:54.411640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.416294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.416331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.416358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.420399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.420437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.420464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.424288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.424324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.424350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.428274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.428312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.428339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.432070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.432108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.432134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.436211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.436248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.436274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.439753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.439790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.439816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.442710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.442758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.442786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.446408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.446444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.446470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.450325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.450392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.450419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.454065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.454131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.454158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.458158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.458197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.458208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.461426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.461458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.461484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.465235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.465271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.465297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.468894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.468930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.468956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.472588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.472625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.472651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.475648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.475686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.918 [2024-11-26 04:18:54.475712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.918 [2024-11-26 04:18:54.479389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.918 [2024-11-26 04:18:54.479426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.479452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.482957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.482994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.483021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.487158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.487194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.487220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.490705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.490751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.490779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.494235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.494287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.494313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.498137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.498188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.498216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.501688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.501746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.501773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.505245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.505280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.505307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.509221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.509258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 
[2024-11-26 04:18:54.509285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.512886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.512923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.512949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.516778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.516814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.516840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.520283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.520319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.520346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.523745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.523795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.523822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.527765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.527801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.527828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.531012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.531050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.531077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.919 [2024-11-26 04:18:54.534135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fedd10) 00:22:52.919 [2024-11-26 04:18:54.534189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.919 [2024-11-26 04:18:54.534215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.919 00:22:52.919 Latency(us) 00:22:52.919 [2024-11-26T04:18:54.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.919 [2024-11-26T04:18:54.687Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:52.919 nvme0n1 : 2.00 8241.52 1030.19 0.00 0.00 1938.09 592.06 8400.52 00:22:52.919 [2024-11-26T04:18:54.687Z] =================================================================================================================== 00:22:52.919 [2024-11-26T04:18:54.687Z] Total : 8241.52 1030.19 0.00 0.00 1938.09 592.06 8400.52 00:22:52.919 0 00:22:52.919 04:18:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:52.919 04:18:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:52.919 04:18:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:52.919 04:18:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:52.919 | .driver_specific 00:22:52.919 | .nvme_error 00:22:52.919 | .status_code 00:22:52.919 | .command_transient_transport_error' 00:22:53.177 04:18:54 -- host/digest.sh@71 -- # (( 532 > 0 )) 00:22:53.177 04:18:54 -- host/digest.sh@73 -- # killprocess 97914 00:22:53.177 04:18:54 -- common/autotest_common.sh@936 -- # '[' -z 97914 ']' 00:22:53.177 04:18:54 -- common/autotest_common.sh@940 -- # kill -0 97914 00:22:53.177 04:18:54 -- common/autotest_common.sh@941 -- # uname 00:22:53.177 04:18:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.177 04:18:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97914 00:22:53.177 04:18:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.177 04:18:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.177 04:18:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97914' 00:22:53.177 killing process with pid 97914 00:22:53.177 04:18:54 -- common/autotest_common.sh@955 -- # kill 97914 00:22:53.177 Received shutdown signal, test time was about 2.000000 seconds 00:22:53.177 00:22:53.177 Latency(us) 00:22:53.177 [2024-11-26T04:18:54.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.177 [2024-11-26T04:18:54.945Z] =================================================================================================================== 00:22:53.177 [2024-11-26T04:18:54.945Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.177 04:18:54 -- common/autotest_common.sh@960 -- # wait 97914 00:22:53.436 04:18:55 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:53.436 04:18:55 -- host/digest.sh@54 -- # local rw bs qd 00:22:53.436 04:18:55 -- host/digest.sh@56 -- # rw=randwrite 00:22:53.436 04:18:55 -- host/digest.sh@56 -- # bs=4096 00:22:53.436 04:18:55 -- host/digest.sh@56 -- # qd=128 00:22:53.436 04:18:55 -- host/digest.sh@58 -- # bperfpid=97999 00:22:53.436 04:18:55 -- host/digest.sh@60 -- # waitforlisten 97999 /var/tmp/bperf.sock 00:22:53.436 04:18:55 -- common/autotest_common.sh@829 -- # '[' -z 97999 ']' 00:22:53.436 04:18:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.436 04:18:55 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite 
-o 4096 -t 2 -q 128 -z 00:22:53.436 04:18:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:53.436 04:18:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.436 04:18:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.436 04:18:55 -- common/autotest_common.sh@10 -- # set +x 00:22:53.436 [2024-11-26 04:18:55.135626] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:53.436 [2024-11-26 04:18:55.135760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97999 ] 00:22:53.695 [2024-11-26 04:18:55.275577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.695 [2024-11-26 04:18:55.339082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.630 04:18:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.630 04:18:56 -- common/autotest_common.sh@862 -- # return 0 00:22:54.630 04:18:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:54.630 04:18:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:54.630 04:18:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:54.631 04:18:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.631 04:18:56 -- common/autotest_common.sh@10 -- # set +x 00:22:54.631 04:18:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.631 04:18:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.631 04:18:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.889 nvme0n1 00:22:54.890 04:18:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:54.890 04:18:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.890 04:18:56 -- common/autotest_common.sh@10 -- # set +x 00:22:55.148 04:18:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.148 04:18:56 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:55.148 04:18:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:55.148 Running I/O for 2 seconds... 
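The trace above shows the whole mechanism for this leg of the test: bdevperf is started paused on its own RPC socket, NVMe error counters and unlimited retries are enabled, crc32c corruption is injected so data digests mismatch in flight, the controller is attached with --ddgst, and the transient-transport-error counter is later read back with bdev_get_iostat piped through jq. A minimal bash sketch of that flow, reassembled from the commands visible in this log, follows; the target-side RPC socket (/var/tmp/spdk.sock) is an assumed default, since the trace invokes rpc_cmd without an explicit -s argument, and the helper names here are illustrative rather than the autotest's own functions.

#!/usr/bin/env bash
# Sketch of the digest error-injection flow traced above (a hypothetical
# reassembly, not the autotest script itself). Assumed: the nvmf target
# listens on the default RPC socket /var/tmp/spdk.sock, and SPDK_DIR points
# at the same checkout used in this run.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
TARGET_SOCK=/var/tmp/spdk.sock    # assumed default; rpc_cmd in the trace omits -s

bperf_rpc()  { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }
target_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$TARGET_SOCK" "$@"; }

# 1. Start bdevperf paused (-z) with the randwrite/4096/qd128 job from the trace.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.2; done   # crude stand-in for waitforlisten

# 2. Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Corrupt every 256th crc32c on the target so data digests fail verification.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# 4. Attach the controller with data digest enabled.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. Run the 2-second workload, then read back the transient transport error
#    count that the test later asserts is greater than zero.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
bperf_rpc bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

# 6. Stop injecting and shut bdevperf down.
target_rpc accel_error_inject_error -o crc32c -t disable
kill "$bperfpid"

Injecting the corruption on the target side would explain the mix of messages in this log: the initiator-side nvme_tcp.c errors for reads and the target-side tcp.c data_crc32_calc_done errors for writes, both surfacing to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that the counter above accumulates.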
00:22:55.148 [2024-11-26 04:18:56.796745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eea00 00:22:55.148 [2024-11-26 04:18:56.796966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.796997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.805527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eb328 00:22:55.149 [2024-11-26 04:18:56.805711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.805732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.814480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eff18 00:22:55.149 [2024-11-26 04:18:56.814910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.814947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.823592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ebb98 00:22:55.149 [2024-11-26 04:18:56.824899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.824948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.832926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e38d0 00:22:55.149 [2024-11-26 04:18:56.833421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.833454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.841814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190de470 00:22:55.149 [2024-11-26 04:18:56.842090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.842140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.850748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e8088 00:22:55.149 [2024-11-26 04:18:56.852128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.852160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.859338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e99d8 00:22:55.149 [2024-11-26 04:18:56.860396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.860428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.868677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e8d30 00:22:55.149 [2024-11-26 04:18:56.869274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.869334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.876337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fd208 00:22:55.149 [2024-11-26 04:18:56.876461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.876480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.887192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f3e60 00:22:55.149 [2024-11-26 04:18:56.887708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.887752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.896033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f1430 00:22:55.149 [2024-11-26 04:18:56.896775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.896831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:55.149 [2024-11-26 04:18:56.903523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ef270 00:22:55.149 [2024-11-26 04:18:56.903670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.149 [2024-11-26 04:18:56.903689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:55.408 [2024-11-26 04:18:56.913082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6300 00:22:55.409 [2024-11-26 04:18:56.914253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.914304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.924219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ed0b0 00:22:55.409 [2024-11-26 04:18:56.924893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.924940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.933365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fb048 00:22:55.409 [2024-11-26 04:18:56.934401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.934435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.940795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f7970 00:22:55.409 [2024-11-26 04:18:56.941326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.941360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.951604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f1868 00:22:55.409 [2024-11-26 04:18:56.952293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.952338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.959306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6fa8 00:22:55.409 [2024-11-26 04:18:56.960394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.960425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.967568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fd640 00:22:55.409 [2024-11-26 04:18:56.968519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.968551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.976703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4b08 00:22:55.409 [2024-11-26 04:18:56.977036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.977066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.985602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e9168 00:22:55.409 [2024-11-26 04:18:56.986440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.986505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:56.994569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e84c0 00:22:55.409 [2024-11-26 04:18:56.995631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:56.995661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.003235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fe720 00:22:55.409 [2024-11-26 04:18:57.004560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.004591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.012845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fb480 00:22:55.409 [2024-11-26 04:18:57.013486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.013547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.021881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6cc8 00:22:55.409 [2024-11-26 04:18:57.022903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.022934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.030870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e7818 00:22:55.409 [2024-11-26 04:18:57.032326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.032357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.039993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df550 00:22:55.409 [2024-11-26 04:18:57.040653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.040736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.047721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e7c50 00:22:55.409 [2024-11-26 04:18:57.048774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.048829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.055956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ecc78 00:22:55.409 [2024-11-26 04:18:57.056908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.056953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.065266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0350 00:22:55.409 [2024-11-26 04:18:57.066083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.066163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.075173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0350 00:22:55.409 [2024-11-26 04:18:57.076558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.076588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.083950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df118 00:22:55.409 [2024-11-26 04:18:57.084925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.084971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.092866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6738 00:22:55.409 [2024-11-26 04:18:57.093970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.094022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.101876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ea680 00:22:55.409 [2024-11-26 04:18:57.103435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.103466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.110648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:55.409 [2024-11-26 04:18:57.111587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.111617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.119203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f5378 00:22:55.409 [2024-11-26 04:18:57.120284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.120313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.128638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e4578 00:22:55.409 [2024-11-26 04:18:57.129250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.129308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.135879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f92c0 00:22:55.409 [2024-11-26 04:18:57.136956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.137001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:55.409 [2024-11-26 04:18:57.145147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e8088 00:22:55.409 [2024-11-26 04:18:57.145452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.409 [2024-11-26 04:18:57.145482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:55.410 [2024-11-26 04:18:57.153840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f5378 00:22:55.410 [2024-11-26 04:18:57.155297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.410 [2024-11-26 04:18:57.155344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:55.410 [2024-11-26 04:18:57.163019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fda78 00:22:55.410 [2024-11-26 04:18:57.164194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.410 [2024-11-26 
04:18:57.164242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.669 [2024-11-26 04:18:57.173230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5ec8 00:22:55.669 [2024-11-26 04:18:57.174261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.669 [2024-11-26 04:18:57.174328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.669 [2024-11-26 04:18:57.182621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0ff8 00:22:55.669 [2024-11-26 04:18:57.183593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.669 [2024-11-26 04:18:57.183624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.669 [2024-11-26 04:18:57.192954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eff18 00:22:55.669 [2024-11-26 04:18:57.193737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.669 [2024-11-26 04:18:57.193789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:55.669 [2024-11-26 04:18:57.201380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f57b0 00:22:55.669 [2024-11-26 04:18:57.202927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.669 [2024-11-26 04:18:57.202976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.669 [2024-11-26 04:18:57.210572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e7818 00:22:55.669 [2024-11-26 04:18:57.211020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.211054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.219396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4298 00:22:55.670 [2024-11-26 04:18:57.220013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.220045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.228134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e1710 00:22:55.670 [2024-11-26 04:18:57.228694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:55.670 [2024-11-26 04:18:57.228751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.236914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fbcf0 00:22:55.670 [2024-11-26 04:18:57.237465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.237511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.245692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ec408 00:22:55.670 [2024-11-26 04:18:57.246247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.246298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.254470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:55.670 [2024-11-26 04:18:57.254973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.255009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.263221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e7c50 00:22:55.670 [2024-11-26 04:18:57.263685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.263730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.271949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:55.670 [2024-11-26 04:18:57.272471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.272519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.279741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e38d0 00:22:55.670 [2024-11-26 04:18:57.279879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.279898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.290562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6300 00:22:55.670 [2024-11-26 04:18:57.291107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6475 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.291154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.299459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fa3a0 00:22:55.670 [2024-11-26 04:18:57.300195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.300241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.308217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e1710 00:22:55.670 [2024-11-26 04:18:57.308936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.308981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.316960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e9e10 00:22:55.670 [2024-11-26 04:18:57.317636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.317682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.325693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e99d8 00:22:55.670 [2024-11-26 04:18:57.326398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.326446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.334470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ea248 00:22:55.670 [2024-11-26 04:18:57.335213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.335259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.342672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fc128 00:22:55.670 [2024-11-26 04:18:57.343137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.343173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.351214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f81e0 00:22:55.670 [2024-11-26 04:18:57.352178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21351 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.352208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.359817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f81e0 00:22:55.670 [2024-11-26 04:18:57.361018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.361048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.370588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e12d8 00:22:55.670 [2024-11-26 04:18:57.371178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.371213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.378241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f5be8 00:22:55.670 [2024-11-26 04:18:57.379343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.379373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.386941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f8e88 00:22:55.670 [2024-11-26 04:18:57.387141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.387160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.395662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f9f68 00:22:55.670 [2024-11-26 04:18:57.396854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.396884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.406534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fcdd0 00:22:55.670 [2024-11-26 04:18:57.407246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.407290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.414508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0350 00:22:55.670 [2024-11-26 04:18:57.415608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:3570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.415639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:55.670 [2024-11-26 04:18:57.424468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fef90 00:22:55.670 [2024-11-26 04:18:57.425071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.670 [2024-11-26 04:18:57.425104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.433192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190dfdc0 00:22:55.930 [2024-11-26 04:18:57.434534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.434569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.444306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f46d0 00:22:55.930 [2024-11-26 04:18:57.445049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.445094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.452877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f8618 00:22:55.930 [2024-11-26 04:18:57.454282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.454360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.462013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fbcf0 00:22:55.930 [2024-11-26 04:18:57.462416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.462450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.470937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6cc8 00:22:55.930 [2024-11-26 04:18:57.471487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.471534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.479812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e95a0 00:22:55.930 [2024-11-26 04:18:57.480334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.480381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.488549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eb760 00:22:55.930 [2024-11-26 04:18:57.489060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.489095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.497277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0788 00:22:55.930 [2024-11-26 04:18:57.497772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.497817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.506017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e7c50 00:22:55.930 [2024-11-26 04:18:57.506497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.506531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.515030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fb048 00:22:55.930 [2024-11-26 04:18:57.515954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.515984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.523500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190efae0 00:22:55.930 [2024-11-26 04:18:57.524139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.524215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.532251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ed4e8 00:22:55.930 [2024-11-26 04:18:57.533565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.533595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.540884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f2d80 00:22:55.930 [2024-11-26 04:18:57.541962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.542029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.549990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6300 00:22:55.930 [2024-11-26 04:18:57.550330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.550366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.558899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fb480 00:22:55.930 [2024-11-26 04:18:57.559423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.559458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.567922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ebfd0 00:22:55.930 [2024-11-26 04:18:57.569619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.569665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.578856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fdeb0 00:22:55.930 [2024-11-26 04:18:57.579773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.579844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.585648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4b08 00:22:55.930 [2024-11-26 04:18:57.585846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.585865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.596935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6458 00:22:55.930 [2024-11-26 04:18:57.597514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.597561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.604581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e1710 00:22:55.930 [2024-11-26 
04:18:57.605782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.930 [2024-11-26 04:18:57.605838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.930 [2024-11-26 04:18:57.613076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f96f8 00:22:55.931 [2024-11-26 04:18:57.613949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.614015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.622527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ecc78 00:22:55.931 [2024-11-26 04:18:57.622980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.623015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.632518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f7538 00:22:55.931 [2024-11-26 04:18:57.633408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.633454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.640568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fb480 00:22:55.931 [2024-11-26 04:18:57.640900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.640939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.649454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f8618 00:22:55.931 [2024-11-26 04:18:57.650948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.651000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.658148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df550 00:22:55.931 [2024-11-26 04:18:57.659461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.659508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.667873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with 
pdu=0x2000190f2948 00:22:55.931 [2024-11-26 04:18:57.668735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.668787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.676000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5ec8 00:22:55.931 [2024-11-26 04:18:57.676755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.676809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:55.931 [2024-11-26 04:18:57.684854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df988 00:22:55.931 [2024-11-26 04:18:57.686071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.931 [2024-11-26 04:18:57.686120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:56.190 [2024-11-26 04:18:57.694276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4298 00:22:56.190 [2024-11-26 04:18:57.695552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.190 [2024-11-26 04:18:57.695597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:56.190 [2024-11-26 04:18:57.703655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f1ca0 00:22:56.190 [2024-11-26 04:18:57.704642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.190 [2024-11-26 04:18:57.704687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:56.190 [2024-11-26 04:18:57.712883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e1710 00:22:56.191 [2024-11-26 04:18:57.713182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.713214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.721764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0350 00:22:56.191 [2024-11-26 04:18:57.722061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.722143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.730545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ceb0e0) with pdu=0x2000190e5ec8 00:22:56.191 [2024-11-26 04:18:57.731094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.731143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.740083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ec840 00:22:56.191 [2024-11-26 04:18:57.741252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.741312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.750203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fdeb0 00:22:56.191 [2024-11-26 04:18:57.750898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.750945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.757828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df550 00:22:56.191 [2024-11-26 04:18:57.758967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.759014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.766571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5a90 00:22:56.191 [2024-11-26 04:18:57.768023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.768069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.775472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ef270 00:22:56.191 [2024-11-26 04:18:57.775973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.776007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.785652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df988 00:22:56.191 [2024-11-26 04:18:57.787203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.787252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.793475] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e3d08 00:22:56.191 [2024-11-26 04:18:57.794601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.794650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.801877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190de038 00:22:56.191 [2024-11-26 04:18:57.802489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.802548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.811242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6020 00:22:56.191 [2024-11-26 04:18:57.811623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.811656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.820675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e2c28 00:22:56.191 [2024-11-26 04:18:57.821532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.821575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.829749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e49b0 00:22:56.191 [2024-11-26 04:18:57.831053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.831104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.838934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4298 00:22:56.191 [2024-11-26 04:18:57.839318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.839352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.847776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ec840 00:22:56.191 [2024-11-26 04:18:57.848354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.848388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.855429] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6fa8 00:22:56.191 [2024-11-26 04:18:57.855592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.855611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.866376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0788 00:22:56.191 [2024-11-26 04:18:57.866950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.866984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.875219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5ec8 00:22:56.191 [2024-11-26 04:18:57.875986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.876032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.883955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ff3c8 00:22:56.191 [2024-11-26 04:18:57.884674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.884741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.892764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f7da8 00:22:56.191 [2024-11-26 04:18:57.893457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.893503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.901508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ed0b0 00:22:56.191 [2024-11-26 04:18:57.902180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.902258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.910268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ec408 00:22:56.191 [2024-11-26 04:18:57.910942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.910986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:56.191 
[2024-11-26 04:18:57.919054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190de8a8 00:22:56.191 [2024-11-26 04:18:57.919703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.919773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.927838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5a90 00:22:56.191 [2024-11-26 04:18:57.928532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.928578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.935544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6890 00:22:56.191 [2024-11-26 04:18:57.935821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.935871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:56.191 [2024-11-26 04:18:57.945214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e4de8 00:22:56.191 [2024-11-26 04:18:57.945608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.191 [2024-11-26 04:18:57.945642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:57.954891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e23b8 00:22:56.451 [2024-11-26 04:18:57.956225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:57.956265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:57.963828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f20d8 00:22:56.451 [2024-11-26 04:18:57.964522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:57.964568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:57.972879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f5378 00:22:56.451 [2024-11-26 04:18:57.973693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:57.973762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f 
p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:57.981577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190dece0 00:22:56.451 [2024-11-26 04:18:57.982451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:57.982499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:57.989888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fd640 00:22:56.451 [2024-11-26 04:18:57.990695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:57.990780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:58.000006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e84c0 00:22:56.451 [2024-11-26 04:18:58.001245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:58.001276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:58.008874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fda78 00:22:56.451 [2024-11-26 04:18:58.009783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:58.009821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:58.017830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e3498 00:22:56.451 [2024-11-26 04:18:58.018927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:58.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:56.451 [2024-11-26 04:18:58.026537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f2d80 00:22:56.451 [2024-11-26 04:18:58.028000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.451 [2024-11-26 04:18:58.028045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.036224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e3d08 00:22:56.452 [2024-11-26 04:18:58.037079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.037108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.043992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e4de8 00:22:56.452 [2024-11-26 04:18:58.045079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.045108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.053432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e3498 00:22:56.452 [2024-11-26 04:18:58.054073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.054146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.062166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f46d0 00:22:56.452 [2024-11-26 04:18:58.062790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.062872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.069856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f8618 00:22:56.452 [2024-11-26 04:18:58.070806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.070866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.078686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e1710 00:22:56.452 [2024-11-26 04:18:58.079792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.079829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.087483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eff18 00:22:56.452 [2024-11-26 04:18:58.087830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.087868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.097349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ff3c8 00:22:56.452 [2024-11-26 04:18:58.098082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.098130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.104786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6890 00:22:56.452 [2024-11-26 04:18:58.105699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.105737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.115961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e0ea0 00:22:56.452 [2024-11-26 04:18:58.116675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.116743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.124348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fd208 00:22:56.452 [2024-11-26 04:18:58.125718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.125756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.133583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4f40 00:22:56.452 [2024-11-26 04:18:58.134103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.134139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.141948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ef270 00:22:56.452 [2024-11-26 04:18:58.142158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.142177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.150908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f2948 00:22:56.452 [2024-11-26 04:18:58.152197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.152226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.160457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e38d0 00:22:56.452 [2024-11-26 04:18:58.160983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.161017] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.170345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eaab8 00:22:56.452 [2024-11-26 04:18:58.171023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.171068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.178162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eb760 00:22:56.452 [2024-11-26 04:18:58.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.179475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.186602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f7da8 00:22:56.452 [2024-11-26 04:18:58.187699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.187766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.198374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190edd58 00:22:56.452 [2024-11-26 04:18:58.199290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.199333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:56.452 [2024-11-26 04:18:58.205216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:56.452 [2024-11-26 04:18:58.205347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.452 [2024-11-26 04:18:58.205366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.216089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f3a28 00:22:56.712 [2024-11-26 04:18:58.217416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.217462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.227020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e12d8 00:22:56.712 [2024-11-26 04:18:58.227666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.227732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.235162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f2948 00:22:56.712 [2024-11-26 04:18:58.236464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.236494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.244176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ff3c8 00:22:56.712 [2024-11-26 04:18:58.244690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.244762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.252761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f1430 00:22:56.712 [2024-11-26 04:18:58.253796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.253836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.261407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f1430 00:22:56.712 [2024-11-26 04:18:58.262720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.262779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.272289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6300 00:22:56.712 [2024-11-26 04:18:58.273077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.273108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.280827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fda78 00:22:56.712 [2024-11-26 04:18:58.282155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.282205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.289905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f2948 00:22:56.712 [2024-11-26 04:18:58.290250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 
04:18:58.290281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.298879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f46d0 00:22:56.712 [2024-11-26 04:18:58.299368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.299403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.307607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eee38 00:22:56.712 [2024-11-26 04:18:58.308102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.308136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.316405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f31b8 00:22:56.712 [2024-11-26 04:18:58.316899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.316933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.325252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ea248 00:22:56.712 [2024-11-26 04:18:58.325660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.325689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.335991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eea00 00:22:56.712 [2024-11-26 04:18:58.336681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.336742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.343849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5a90 00:22:56.712 [2024-11-26 04:18:58.344148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.712 [2024-11-26 04:18:58.344188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.712 [2024-11-26 04:18:58.353545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fa7d8 00:22:56.712 [2024-11-26 04:18:58.354469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:56.713 [2024-11-26 04:18:58.354519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.364185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6890 00:22:56.713 [2024-11-26 04:18:58.364862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.364907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.371903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190de8a8 00:22:56.713 [2024-11-26 04:18:58.373044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.373073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.380221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ff3c8 00:22:56.713 [2024-11-26 04:18:58.381362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.381395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.390826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:56.713 [2024-11-26 04:18:58.391501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.391549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.398501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f81e0 00:22:56.713 [2024-11-26 04:18:58.399619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.399650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.407159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f7da8 00:22:56.713 [2024-11-26 04:18:58.407467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.407516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.416038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5a90 00:22:56.713 [2024-11-26 04:18:58.416535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18807 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.416571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.425015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fac10 00:22:56.713 [2024-11-26 04:18:58.426111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.426160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.433740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f0788 00:22:56.713 [2024-11-26 04:18:58.435200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.435234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.443485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e0ea0 00:22:56.713 [2024-11-26 04:18:58.444656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.444685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.452196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ec408 00:22:56.713 [2024-11-26 04:18:58.453249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.460786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ee5c8 00:22:56.713 [2024-11-26 04:18:58.462333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.462380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:56.713 [2024-11-26 04:18:58.469579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f5378 00:22:56.713 [2024-11-26 04:18:58.471036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.713 [2024-11-26 04:18:58.471085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.479564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f1430 00:22:56.973 [2024-11-26 04:18:58.480676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.480706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.488133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5658 00:22:56.973 [2024-11-26 04:18:58.489085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.489116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.496776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eee38 00:22:56.973 [2024-11-26 04:18:58.496869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.496889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.505740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e5658 00:22:56.973 [2024-11-26 04:18:58.506654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.506686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.516733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f6458 00:22:56.973 [2024-11-26 04:18:58.517423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.517470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.525116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e4578 00:22:56.973 [2024-11-26 04:18:58.526571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.526606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.534202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e1710 00:22:56.973 [2024-11-26 04:18:58.534566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.534599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.543050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:56.973 [2024-11-26 04:18:58.543583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:15149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.543631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.551997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ec408 00:22:56.973 [2024-11-26 04:18:58.553481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.553513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.561150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f96f8 00:22:56.973 [2024-11-26 04:18:58.561832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.561879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.568501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e84c0 00:22:56.973 [2024-11-26 04:18:58.569634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.569665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.579450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190de470 00:22:56.973 [2024-11-26 04:18:58.580054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.580086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.587874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6738 00:22:56.973 [2024-11-26 04:18:58.589095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.589124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.596904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f8618 00:22:56.973 [2024-11-26 04:18:58.597148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.597221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.605754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e23b8 00:22:56.973 [2024-11-26 04:18:58.606233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.606270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.614488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f4f40 00:22:56.973 [2024-11-26 04:18:58.614947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.614981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.623264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e8088 00:22:56.973 [2024-11-26 04:18:58.623656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.623689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.632036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eee38 00:22:56.973 [2024-11-26 04:18:58.632402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.632436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.640760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190dece0 00:22:56.973 [2024-11-26 04:18:58.641095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.641137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.649498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f92c0 00:22:56.973 [2024-11-26 04:18:58.649837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.649866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.658253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190dece0 00:22:56.973 [2024-11-26 04:18:58.658714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.658762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.667246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f8e88 00:22:56.973 [2024-11-26 
04:18:58.667952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.667996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.676038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e6b70 00:22:56.973 [2024-11-26 04:18:58.676333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.676377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.684600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e88f8 00:22:56.973 [2024-11-26 04:18:58.685030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.685065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.694934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190f9b30 00:22:56.973 [2024-11-26 04:18:58.696309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.973 [2024-11-26 04:18:58.696338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:56.973 [2024-11-26 04:18:58.703679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df118 00:22:56.973 [2024-11-26 04:18:58.704581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.974 [2024-11-26 04:18:58.704611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.974 [2024-11-26 04:18:58.711517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190efae0 00:22:56.974 [2024-11-26 04:18:58.711849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.974 [2024-11-26 04:18:58.711883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:56.974 [2024-11-26 04:18:58.722732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190df988 00:22:56.974 [2024-11-26 04:18:58.723568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.974 [2024-11-26 04:18:58.723596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:56.974 [2024-11-26 04:18:58.730624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190fa3a0 
00:22:56.974 [2024-11-26 04:18:58.732028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.974 [2024-11-26 04:18:58.732058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:57.233 [2024-11-26 04:18:58.741094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e9168 00:22:57.233 [2024-11-26 04:18:58.742558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.233 [2024-11-26 04:18:58.742605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:57.233 [2024-11-26 04:18:58.749401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190feb58 00:22:57.233 [2024-11-26 04:18:58.750540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.233 [2024-11-26 04:18:58.750586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:57.233 [2024-11-26 04:18:58.759511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190eb328 00:22:57.233 [2024-11-26 04:18:58.760139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.233 [2024-11-26 04:18:58.760172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:57.233 [2024-11-26 04:18:58.767104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190e4140 00:22:57.233 [2024-11-26 04:18:58.768269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.233 [2024-11-26 04:18:58.768314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:57.233 [2024-11-26 04:18:58.776056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb0e0) with pdu=0x2000190ea248 00:22:57.233 [2024-11-26 04:18:58.776403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.233 [2024-11-26 04:18:58.776436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:57.233 00:22:57.233 Latency(us) 00:22:57.233 [2024-11-26T04:18:59.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.233 [2024-11-26T04:18:59.001Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:57.233 nvme0n1 : 2.01 28218.61 110.23 0.00 0.00 4531.10 1854.37 14596.65 00:22:57.233 [2024-11-26T04:18:59.001Z] =================================================================================================================== 00:22:57.233 [2024-11-26T04:18:59.001Z] Total : 28218.61 110.23 0.00 0.00 4531.10 1854.37 14596.65 00:22:57.233 0 00:22:57.233 04:18:58 -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:57.233 04:18:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:57.233 04:18:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:57.233 | .driver_specific 00:22:57.233 | .nvme_error 00:22:57.233 | .status_code 00:22:57.233 | .command_transient_transport_error' 00:22:57.233 04:18:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:57.492 04:18:59 -- host/digest.sh@71 -- # (( 221 > 0 )) 00:22:57.492 04:18:59 -- host/digest.sh@73 -- # killprocess 97999 00:22:57.492 04:18:59 -- common/autotest_common.sh@936 -- # '[' -z 97999 ']' 00:22:57.492 04:18:59 -- common/autotest_common.sh@940 -- # kill -0 97999 00:22:57.492 04:18:59 -- common/autotest_common.sh@941 -- # uname 00:22:57.492 04:18:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.492 04:18:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97999 00:22:57.492 04:18:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:57.492 04:18:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:57.492 04:18:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97999' 00:22:57.492 killing process with pid 97999 00:22:57.492 04:18:59 -- common/autotest_common.sh@955 -- # kill 97999 00:22:57.492 Received shutdown signal, test time was about 2.000000 seconds 00:22:57.492 00:22:57.492 Latency(us) 00:22:57.492 [2024-11-26T04:18:59.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.492 [2024-11-26T04:18:59.260Z] =================================================================================================================== 00:22:57.492 [2024-11-26T04:18:59.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.492 04:18:59 -- common/autotest_common.sh@960 -- # wait 97999 00:22:57.750 04:18:59 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:57.750 04:18:59 -- host/digest.sh@54 -- # local rw bs qd 00:22:57.750 04:18:59 -- host/digest.sh@56 -- # rw=randwrite 00:22:57.750 04:18:59 -- host/digest.sh@56 -- # bs=131072 00:22:57.750 04:18:59 -- host/digest.sh@56 -- # qd=16 00:22:57.750 04:18:59 -- host/digest.sh@58 -- # bperfpid=98095 00:22:57.750 04:18:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:57.750 04:18:59 -- host/digest.sh@60 -- # waitforlisten 98095 /var/tmp/bperf.sock 00:22:57.750 04:18:59 -- common/autotest_common.sh@829 -- # '[' -z 98095 ']' 00:22:57.750 04:18:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:57.750 04:18:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:57.750 04:18:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:57.750 04:18:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.750 04:18:59 -- common/autotest_common.sh@10 -- # set +x 00:22:57.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:57.750 Zero copy mechanism will not be used. 00:22:57.750 [2024-11-26 04:18:59.376740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:57.750 [2024-11-26 04:18:59.376825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98095 ] 00:22:58.008 [2024-11-26 04:18:59.514214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.008 [2024-11-26 04:18:59.568309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.946 04:19:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.946 04:19:00 -- common/autotest_common.sh@862 -- # return 0 00:22:58.946 04:19:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:58.946 04:19:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:58.946 04:19:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:58.946 04:19:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.946 04:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:58.946 04:19:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.946 04:19:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.946 04:19:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.205 nvme0n1 00:22:59.205 04:19:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:59.205 04:19:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.205 04:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:59.466 04:19:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.466 04:19:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:59.466 04:19:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:59.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:59.466 Zero copy mechanism will not be used. 00:22:59.466 Running I/O for 2 seconds... 
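For readers following the xtrace above, the commands driving this second digest-error pass can be condensed into a short shell sketch. This is an illustrative reconstruction of what the logged host/digest.sh calls do, not the script itself: the rpc.py and bdevperf paths, the bperf socket, the target address/NQN, and the jq filter are taken from the trace, while the wrapper structure and comments are added here for readability.

  #!/usr/bin/env bash
  # Sketch (assumed wrapper) of the logged digest-error flow; commands mirror the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf in wait mode (-z): randwrite, 131072-byte I/O, queue depth 16, 2 seconds.
  # (The harness then waits for the UNIX socket via waitforlisten, per the trace.)
  "$BPERF" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

  # Enable per-bdev NVMe error counters and set the bdev retry count, as in the trace.
  "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled (--ddgst).
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c computation so the target reports data digest errors.
  # (In the trace this is rpc_cmd, i.e. the target application's default RPC socket.)
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload, then read back the transient transport error counter.
  "$BPERF_PY" -s "$SOCK" perform_tests
  "$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The closing jq expression is the same filter the harness used earlier in this trace when it asserted that the counter from the previous pass was non-zero (the "(( 221 > 0 ))" check); each "COMMAND TRANSIENT TRANSPORT ERROR" completion printed below increments it.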
00:22:59.466 [2024-11-26 04:19:01.072206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.072602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.072644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.076315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.076657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.076692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.080641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.080817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.080840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.084686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.084841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.084862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.088757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.088859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.088880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.092790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.092871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.092891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.096898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.097049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.097070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.101015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.101179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.101200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.104922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.105104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.105125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.108911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.109001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.109021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.112984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.113107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.113128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.117000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.117082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.117102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.121044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.121134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.121154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.124996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.125133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.125154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.129094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.129293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.129313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.133183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.133346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.133366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.137139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.137256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.137276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.141167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.466 [2024-11-26 04:19:01.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.466 [2024-11-26 04:19:01.141339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.466 [2024-11-26 04:19:01.145150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.145303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.145323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.149204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.149300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.149322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.153234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.153321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.153341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.157280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.157417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.157438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.161323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.161540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.161561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.165467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.165652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.165674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.169473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.169650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.169670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.173479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.173602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.173623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.177521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.177677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.177698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.181563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.181679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.185524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.185607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.185627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.189672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.189830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.189851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.193640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.193867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.193888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.197741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.197906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.197927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.201722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.201890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.201911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.205734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.205858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.205879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.209699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.209875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 
04:19:01.209896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.213731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.213850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.213870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.217763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.217858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.217879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.221857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.222029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.222051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.467 [2024-11-26 04:19:01.226378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.467 [2024-11-26 04:19:01.226531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.467 [2024-11-26 04:19:01.226551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.230627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.230779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.230801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.235080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.235302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.235323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.239191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.239346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.728 [2024-11-26 04:19:01.239367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.243381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.243525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.243548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.247509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.247636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.247667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.251647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.251810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.251833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.255856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.255999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.256037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.259925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.260235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.260274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.264052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.264228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.264248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.268234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.268381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.268407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.272265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.272366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.272387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.276372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.276523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.276550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.280373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.280482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.280503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.728 [2024-11-26 04:19:01.284446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.728 [2024-11-26 04:19:01.284545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.728 [2024-11-26 04:19:01.284566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.288599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.288796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.288817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.292670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.292999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.293033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.296594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.296727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.296776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.300796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.300990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.301038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.304687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.304872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.304893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.308670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.308809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.308830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.312642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.312844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.312865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.316562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.316694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.316717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.320515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.320675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.320696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.324682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.324828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.324848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.328721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.328829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.328851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.332721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.332893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.332914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.336667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.336855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.336876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.340670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.340800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.340820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.344790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.344955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.344976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.348681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.348813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.348834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.352702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 
[2024-11-26 04:19:01.352883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.352904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.356848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.356955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.356976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.360817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.360935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.360956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.364729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.364914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.364935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.368624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.368946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.368989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.372488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.372590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.372611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.376492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.376666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.376686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.380428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.380622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.380643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.384476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.384602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.384623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.388648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.388806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.388827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.729 [2024-11-26 04:19:01.392570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.729 [2024-11-26 04:19:01.392681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.729 [2024-11-26 04:19:01.392702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.396561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.396734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.396767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.400597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.400732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.400766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.404488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.404592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.404614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.408572] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.408743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.408777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.412506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.412809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.412857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.416457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.416576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.416596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.420590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.420766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.420788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.424557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.424667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.424688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.428662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.428837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.428858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.432613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.432758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.432792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.436662] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.436811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.436832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.440761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.440927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.440947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.444821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.445061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.445104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.449003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.449170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.449189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.453021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.453202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.453237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.456954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.457071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.457091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.460963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.461109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.461129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.730 
[2024-11-26 04:19:01.464995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.465104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.465123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.468930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.469045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.469066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.472911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.473089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.473124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.476882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.477170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.477208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.480804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.480912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.480932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.730 [2024-11-26 04:19:01.484848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.730 [2024-11-26 04:19:01.485014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.730 [2024-11-26 04:19:01.485035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.991 [2024-11-26 04:19:01.489319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.991 [2024-11-26 04:19:01.489509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.991 [2024-11-26 04:19:01.489540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:59.991 [2024-11-26 04:19:01.493528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.991 [2024-11-26 04:19:01.493719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.493740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.497938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.498120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.498141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.501926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.502069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.502089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.506035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.506191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.506211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.510079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.510201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.510221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.514075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.514185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.514207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.518083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.518262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.518298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.522507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.522819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.522888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.526427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.526551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.526571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.530540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.530705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.530740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.534538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.534637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.534658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.538557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.538754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.538775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.542582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.542692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.542712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.546507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.546599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.546618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.550524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.550673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.550693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.554434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.554578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.554598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.558475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.558558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.558578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.562584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.562698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.562734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.566491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.566578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.566598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.570536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.570671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.570691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.574466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.574598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.578393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.578501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.578521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.582389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.582541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.582561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.586301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.586541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.586560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.590327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.590511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.590531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.594263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.594369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.594389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.598192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.992 [2024-11-26 04:19:01.598334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.992 [2024-11-26 04:19:01.598353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.992 [2024-11-26 04:19:01.602267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.602413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.602432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.606231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.606351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.606370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.610149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.610268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.610303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.614172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.614350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.614370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.618117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.618255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.618299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.622249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.622366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.622394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.626282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.626411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.626430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.630171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.630275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 
04:19:01.630311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.634167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.634337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.634357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.638193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.638338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.638359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.642171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.642286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.642307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.646182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.646373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.646393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.650105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.650408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.650434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.654091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.654217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.654238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.658210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.658355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.993 [2024-11-26 04:19:01.658375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.662175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.662264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.662284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.666178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.666372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.666392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.670046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.670218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.670239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.673958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.674056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.674077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.677950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.678127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.678148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.681951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.682245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.682270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.685881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.685961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.685981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.689978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.690133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.690154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.693934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.694097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.694118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.697899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.698102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.698124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.701827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.701934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.701953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.705796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.993 [2024-11-26 04:19:01.705910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.993 [2024-11-26 04:19:01.705930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.993 [2024-11-26 04:19:01.709915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.710094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.710115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.713875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.714200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.714248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.717823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.717926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.717947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.721900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.722077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.722097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.725945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.726227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.726253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.729912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.730030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.730050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.733874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.734001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.734021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.737841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.737936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.737956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.741930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.742083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.742104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.745866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.746023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.746044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.994 [2024-11-26 04:19:01.750172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:22:59.994 [2024-11-26 04:19:01.750268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.994 [2024-11-26 04:19:01.750289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.754795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.754966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.754985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.759027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.759312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.759336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.763352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.763562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.763582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.767492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.767710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.767875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.771670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.771913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.772135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.775829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.776058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.776330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.779986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.780202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.780371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.784139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.784436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.784744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.788518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.788799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.788955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.792694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.793018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.793317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.796892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.797137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.797293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.801059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 
04:19:01.801171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.801193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.805043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.805120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.805140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.809087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.809183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.809203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.813083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.813160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.813181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.817128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.817291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.817312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.821077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.821310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.821331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.825084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.825254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.825274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.829173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 
00:23:00.256 [2024-11-26 04:19:01.829358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.829379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.833189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.833273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.833294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.837245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.837412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.837433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.841304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.841440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.841460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.845256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.845355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.845376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.849211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.256 [2024-11-26 04:19:01.849370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.256 [2024-11-26 04:19:01.849390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.256 [2024-11-26 04:19:01.853129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.853359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.853383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.857063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.857174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.857194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.861087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.861224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.861244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.865081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.865158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.865178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.869057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.869189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.869208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.873071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.873165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.873184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.876925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.877007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.877027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.880955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.881104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.881125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.884891] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.885079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.885099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.888842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.889015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.889035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.892795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.892929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.892949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.896704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.896798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.896819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.900755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.900882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.900901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.904665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.904795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.904815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.908612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.908734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.908755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:00.257 [2024-11-26 04:19:01.912620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.912780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.912800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.916549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.916785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.916806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.920573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.920759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.920781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.924580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.924684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.924704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.928476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.928563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.928583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.932464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.932591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.932611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.936337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.936438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.936458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.940282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.940360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.940379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.944380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.944531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.944551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.948263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.948478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.948498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.952111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.952251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.952271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.956103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.257 [2024-11-26 04:19:01.956205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.257 [2024-11-26 04:19:01.956225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.257 [2024-11-26 04:19:01.960039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.960121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.960141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.964036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.964215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.964235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.968073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.968181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.968201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.972057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.972167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.972187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.976048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.976197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.976216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.979957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.980142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.980162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.983964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.984163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.984183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.987906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.988062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.988082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.258 [2024-11-26 04:19:01.991806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:00.258 [2024-11-26 04:19:01.991890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.258 [2024-11-26 04:19:01.991910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:00.258 - 00:23:01.046 [2024-11-26 04:19:01.995 - 04:19:02.576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90, repeated roughly every 4 ms; each occurrence is paired with nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE*: WRITE sqid:1 cid:15 (later cid:0) nsid:1 len:32 at varying LBAs, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, and nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 with sqhd cycling 0001/0021/0041/0061 p:0 m:0 dnr:0
00:23:01.046 [2024-11-26 04:19:02.579942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.580073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.580095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041
p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.584051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.584198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.584219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.588069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.588191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.588212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.592037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.592214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.592235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.596050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.596178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.596214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.600068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.600208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.600230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.604080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.604246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.604267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.608117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.608238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.608259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.612103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.612235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.612256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.616230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.616370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.616392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.046 [2024-11-26 04:19:02.620326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.046 [2024-11-26 04:19:02.620459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.046 [2024-11-26 04:19:02.620479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.624360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.624510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.624530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.628378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.628507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.628528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.632340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.632502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.632524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.636365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.636539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.636560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.640389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.640488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.640525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.644464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.644574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.644595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.648478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.648631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.648652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.652438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.652574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.656464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.656616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.656637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.660515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.660621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.660642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.664565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.664671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 
[2024-11-26 04:19:02.664692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.668599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.668752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.668774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.672595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.672671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.672691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.676514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.676617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.676638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.680568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.680736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.680758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.684597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.684694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.684741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.688666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.688820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.688841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.692725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.692838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.692859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.696681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.696811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.696832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.700765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.700929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.047 [2024-11-26 04:19:02.700951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.047 [2024-11-26 04:19:02.704758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.047 [2024-11-26 04:19:02.704865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.704885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.708739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.708863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.708883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.712773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.712923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.712943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.716652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.716779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.716801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.720779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.720939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.720959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.724847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.724981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.725002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.728875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.728964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.728985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.732931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.733088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.736943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.737067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.737088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.741010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.741138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.741158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.745134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.745274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.745294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.749184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.749285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.749307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.753244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.753390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.753411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.757381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.757492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.757512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.761452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.761572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.761593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.765579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.765766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.769686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.769789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.769809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.773684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.773816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.773837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.777698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 
[2024-11-26 04:19:02.777838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.777858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.781751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.781881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.781902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.785868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.786018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.786048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.789918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.048 [2024-11-26 04:19:02.790046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.048 [2024-11-26 04:19:02.790067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.048 [2024-11-26 04:19:02.793899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.049 [2024-11-26 04:19:02.793980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.049 [2024-11-26 04:19:02.794010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.049 [2024-11-26 04:19:02.797961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.049 [2024-11-26 04:19:02.798120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.049 [2024-11-26 04:19:02.798141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.049 [2024-11-26 04:19:02.801966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.049 [2024-11-26 04:19:02.802108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.049 [2024-11-26 04:19:02.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.806563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) 
with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.806686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.806706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.810705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.810854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.810874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.815053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.815203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.815224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.819229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.819392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.819412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.823276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.823498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.823519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.827343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.827537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.827558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.831535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.831842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.831864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.835586] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.835809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.835830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.839765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.839906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.839940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.843811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.843971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.843990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.847821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.847921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.847941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.851853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.852029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.852051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.855897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.856031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.856051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.859831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.859909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.859928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 
04:19:02.863848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.310 [2024-11-26 04:19:02.863995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.310 [2024-11-26 04:19:02.864014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.310 [2024-11-26 04:19:02.867794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.867917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.867936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.871799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.871926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.871947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.875824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.875951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.875971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.879785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.879909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.879928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.883833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.883966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.883986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.887830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.887934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.887954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.891856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.891982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.892002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.895869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.895996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.896015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.899851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.899976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.899997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.903850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.903963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.903983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.907873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.908006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.908026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.911865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.911975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.911995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.915898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.916058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.916078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.919894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.919998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.920018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.923815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.923892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.923911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.927807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.927960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.927980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.931799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.931893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.931913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.935826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.935931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.935950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.939853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.939996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.940031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.943874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.943969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.943988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.947892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.948019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.948039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.951870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.951981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.952002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.955875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.955960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.955980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.959899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.960049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.960069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.963844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.963928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.963948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.967853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.967980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 [2024-11-26 04:19:02.967999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.311 [2024-11-26 04:19:02.971789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.311 [2024-11-26 04:19:02.971915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.311 
[2024-11-26 04:19:02.971935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:02.975782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:02.975894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:02.975914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:02.979695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:02.979918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:02.979939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:02.983990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:02.984160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:02.984181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:02.987998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:02.988089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:02.988109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:02.992009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:02.992168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:02.992188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:02.996039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:02.996142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:02.996161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.000014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.000150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.000170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.003960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.004087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.004107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.007867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.007949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.011880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.012032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.012053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.015908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.016058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.016078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.019918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.019997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.020017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.023863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.024023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.024042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.027850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.027940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.027960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.031901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.032026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.032047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.035887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.036013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.036033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.039818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.039911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.039931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.043820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.043973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.043993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.047794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.047896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.047916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.051822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.051946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.051966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.055853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.055995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.056015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.059915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.060018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.060039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.312 [2024-11-26 04:19:03.063891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb280) with pdu=0x2000190fef90 00:23:01.312 [2024-11-26 04:19:03.064020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.312 [2024-11-26 04:19:03.064040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.312 00:23:01.312 Latency(us) 00:23:01.312 [2024-11-26T04:19:03.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.312 [2024-11-26T04:19:03.080Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:01.312 nvme0n1 : 2.00 7665.62 958.20 0.00 0.00 2082.72 1444.77 4676.89 00:23:01.312 [2024-11-26T04:19:03.080Z] =================================================================================================================== 00:23:01.312 [2024-11-26T04:19:03.080Z] Total : 7665.62 958.20 0.00 0.00 2082.72 1444.77 4676.89 00:23:01.312 0 00:23:01.571 04:19:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:01.571 04:19:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:01.571 04:19:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:01.571 | .driver_specific 00:23:01.571 | .nvme_error 00:23:01.571 | .status_code 00:23:01.571 | .command_transient_transport_error' 00:23:01.571 04:19:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:01.831 04:19:03 -- host/digest.sh@71 -- # (( 494 > 0 )) 00:23:01.831 04:19:03 -- host/digest.sh@73 -- # killprocess 98095 00:23:01.831 04:19:03 -- common/autotest_common.sh@936 -- # '[' -z 98095 ']' 00:23:01.831 04:19:03 -- common/autotest_common.sh@940 -- # kill -0 98095 00:23:01.831 04:19:03 -- common/autotest_common.sh@941 -- # uname 00:23:01.831 04:19:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.831 04:19:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98095 00:23:01.831 04:19:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:01.831 04:19:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:01.831 04:19:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98095' 00:23:01.831 killing process with pid 98095 00:23:01.831 Received shutdown signal, test time was about 2.000000 seconds 00:23:01.831 00:23:01.831 Latency(us) 00:23:01.831 [2024-11-26T04:19:03.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.831 [2024-11-26T04:19:03.599Z] 
=================================================================================================================== 00:23:01.831 [2024-11-26T04:19:03.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.831 04:19:03 -- common/autotest_common.sh@955 -- # kill 98095 00:23:01.831 04:19:03 -- common/autotest_common.sh@960 -- # wait 98095 00:23:01.831 04:19:03 -- host/digest.sh@115 -- # killprocess 97799 00:23:01.831 04:19:03 -- common/autotest_common.sh@936 -- # '[' -z 97799 ']' 00:23:01.831 04:19:03 -- common/autotest_common.sh@940 -- # kill -0 97799 00:23:01.831 04:19:03 -- common/autotest_common.sh@941 -- # uname 00:23:01.831 04:19:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.831 04:19:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97799 00:23:02.090 killing process with pid 97799 00:23:02.090 04:19:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:02.090 04:19:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:02.090 04:19:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97799' 00:23:02.090 04:19:03 -- common/autotest_common.sh@955 -- # kill 97799 00:23:02.090 04:19:03 -- common/autotest_common.sh@960 -- # wait 97799 00:23:02.090 00:23:02.090 real 0m17.830s 00:23:02.090 user 0m33.447s 00:23:02.090 sys 0m5.465s 00:23:02.349 ************************************ 00:23:02.349 END TEST nvmf_digest_error 00:23:02.349 ************************************ 00:23:02.349 04:19:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:02.349 04:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.349 04:19:03 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:02.349 04:19:03 -- host/digest.sh@139 -- # nvmftestfini 00:23:02.349 04:19:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:02.349 04:19:03 -- nvmf/common.sh@116 -- # sync 00:23:02.349 04:19:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:02.349 04:19:03 -- nvmf/common.sh@119 -- # set +e 00:23:02.349 04:19:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:02.349 04:19:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:02.349 rmmod nvme_tcp 00:23:02.349 rmmod nvme_fabrics 00:23:02.349 rmmod nvme_keyring 00:23:02.349 04:19:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:02.349 Process with pid 97799 is not found 00:23:02.349 04:19:04 -- nvmf/common.sh@123 -- # set -e 00:23:02.349 04:19:04 -- nvmf/common.sh@124 -- # return 0 00:23:02.349 04:19:04 -- nvmf/common.sh@477 -- # '[' -n 97799 ']' 00:23:02.349 04:19:04 -- nvmf/common.sh@478 -- # killprocess 97799 00:23:02.349 04:19:04 -- common/autotest_common.sh@936 -- # '[' -z 97799 ']' 00:23:02.349 04:19:04 -- common/autotest_common.sh@940 -- # kill -0 97799 00:23:02.349 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97799) - No such process 00:23:02.349 04:19:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97799 is not found' 00:23:02.349 04:19:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:02.349 04:19:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:02.349 04:19:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:02.349 04:19:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.349 04:19:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:02.349 04:19:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.349 04:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.349 04:19:04 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.349 04:19:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:02.349 00:23:02.349 real 0m36.105s 00:23:02.349 user 1m5.019s 00:23:02.349 sys 0m11.246s 00:23:02.349 ************************************ 00:23:02.349 END TEST nvmf_digest 00:23:02.349 ************************************ 00:23:02.349 04:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:02.349 04:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:02.349 04:19:04 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:02.349 04:19:04 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:02.349 04:19:04 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:02.349 04:19:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:02.349 04:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:02.349 04:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:02.349 ************************************ 00:23:02.349 START TEST nvmf_mdns_discovery 00:23:02.349 ************************************ 00:23:02.349 04:19:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:02.609 * Looking for test storage... 00:23:02.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:02.609 04:19:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:02.609 04:19:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:02.609 04:19:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:02.609 04:19:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:02.609 04:19:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:02.609 04:19:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:02.609 04:19:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:02.609 04:19:04 -- scripts/common.sh@335 -- # IFS=.-: 00:23:02.609 04:19:04 -- scripts/common.sh@335 -- # read -ra ver1 00:23:02.609 04:19:04 -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.609 04:19:04 -- scripts/common.sh@336 -- # read -ra ver2 00:23:02.609 04:19:04 -- scripts/common.sh@337 -- # local 'op=<' 00:23:02.609 04:19:04 -- scripts/common.sh@339 -- # ver1_l=2 00:23:02.609 04:19:04 -- scripts/common.sh@340 -- # ver2_l=1 00:23:02.609 04:19:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:02.609 04:19:04 -- scripts/common.sh@343 -- # case "$op" in 00:23:02.609 04:19:04 -- scripts/common.sh@344 -- # : 1 00:23:02.609 04:19:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:02.609 04:19:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.609 04:19:04 -- scripts/common.sh@364 -- # decimal 1 00:23:02.609 04:19:04 -- scripts/common.sh@352 -- # local d=1 00:23:02.609 04:19:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.609 04:19:04 -- scripts/common.sh@354 -- # echo 1 00:23:02.609 04:19:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:02.609 04:19:04 -- scripts/common.sh@365 -- # decimal 2 00:23:02.609 04:19:04 -- scripts/common.sh@352 -- # local d=2 00:23:02.609 04:19:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.609 04:19:04 -- scripts/common.sh@354 -- # echo 2 00:23:02.609 04:19:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:02.609 04:19:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:02.609 04:19:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:02.609 04:19:04 -- scripts/common.sh@367 -- # return 0 00:23:02.609 04:19:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.609 04:19:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:02.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.609 --rc genhtml_branch_coverage=1 00:23:02.609 --rc genhtml_function_coverage=1 00:23:02.609 --rc genhtml_legend=1 00:23:02.609 --rc geninfo_all_blocks=1 00:23:02.609 --rc geninfo_unexecuted_blocks=1 00:23:02.609 00:23:02.609 ' 00:23:02.609 04:19:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:02.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.609 --rc genhtml_branch_coverage=1 00:23:02.609 --rc genhtml_function_coverage=1 00:23:02.609 --rc genhtml_legend=1 00:23:02.609 --rc geninfo_all_blocks=1 00:23:02.609 --rc geninfo_unexecuted_blocks=1 00:23:02.609 00:23:02.609 ' 00:23:02.609 04:19:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:02.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.609 --rc genhtml_branch_coverage=1 00:23:02.609 --rc genhtml_function_coverage=1 00:23:02.609 --rc genhtml_legend=1 00:23:02.609 --rc geninfo_all_blocks=1 00:23:02.609 --rc geninfo_unexecuted_blocks=1 00:23:02.609 00:23:02.609 ' 00:23:02.609 04:19:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:02.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.609 --rc genhtml_branch_coverage=1 00:23:02.609 --rc genhtml_function_coverage=1 00:23:02.609 --rc genhtml_legend=1 00:23:02.609 --rc geninfo_all_blocks=1 00:23:02.609 --rc geninfo_unexecuted_blocks=1 00:23:02.609 00:23:02.609 ' 00:23:02.609 04:19:04 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:02.609 04:19:04 -- nvmf/common.sh@7 -- # uname -s 00:23:02.609 04:19:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.609 04:19:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.609 04:19:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.609 04:19:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.609 04:19:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.609 04:19:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.609 04:19:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.609 04:19:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.609 04:19:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.609 04:19:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.609 04:19:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
00:23:02.609 04:19:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:23:02.609 04:19:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.609 04:19:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.609 04:19:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:02.609 04:19:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.609 04:19:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.609 04:19:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.609 04:19:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.609 04:19:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.609 04:19:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.609 04:19:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.609 04:19:04 -- paths/export.sh@5 -- # export PATH 00:23:02.609 04:19:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.609 04:19:04 -- nvmf/common.sh@46 -- # : 0 00:23:02.609 04:19:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:02.609 04:19:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:02.609 04:19:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:02.609 04:19:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.609 04:19:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.609 04:19:04 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:02.610 04:19:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:02.610 04:19:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:02.610 04:19:04 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:02.610 04:19:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:02.610 04:19:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.610 04:19:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:02.610 04:19:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:02.610 04:19:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:02.610 04:19:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.610 04:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.610 04:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.610 04:19:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:02.610 04:19:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:02.610 04:19:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:02.610 04:19:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:02.610 04:19:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:02.610 04:19:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:02.610 04:19:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.610 04:19:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.610 04:19:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:02.610 04:19:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:02.610 04:19:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:02.610 04:19:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:02.610 04:19:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:02.610 04:19:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.610 04:19:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:02.610 04:19:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:02.610 04:19:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:02.610 04:19:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:02.610 04:19:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:02.610 04:19:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:02.610 Cannot find device "nvmf_tgt_br" 00:23:02.610 04:19:04 -- nvmf/common.sh@154 -- # true 00:23:02.610 04:19:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.868 Cannot find device "nvmf_tgt_br2" 00:23:02.868 04:19:04 -- nvmf/common.sh@155 -- # true 00:23:02.868 04:19:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:02.868 04:19:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:02.868 Cannot find device "nvmf_tgt_br" 00:23:02.868 04:19:04 -- nvmf/common.sh@157 -- # true 00:23:02.869 
04:19:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:02.869 Cannot find device "nvmf_tgt_br2" 00:23:02.869 04:19:04 -- nvmf/common.sh@158 -- # true 00:23:02.869 04:19:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:02.869 04:19:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:02.869 04:19:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.869 04:19:04 -- nvmf/common.sh@161 -- # true 00:23:02.869 04:19:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.869 04:19:04 -- nvmf/common.sh@162 -- # true 00:23:02.869 04:19:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:02.869 04:19:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:02.869 04:19:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:02.869 04:19:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:02.869 04:19:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:02.869 04:19:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:02.869 04:19:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:02.869 04:19:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:02.869 04:19:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:02.869 04:19:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:02.869 04:19:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:02.869 04:19:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:02.869 04:19:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:02.869 04:19:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:02.869 04:19:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:02.869 04:19:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:02.869 04:19:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:02.869 04:19:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:02.869 04:19:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:03.127 04:19:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:03.127 04:19:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:03.127 04:19:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:03.127 04:19:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:03.127 04:19:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:03.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:23:03.127 00:23:03.127 --- 10.0.0.2 ping statistics --- 00:23:03.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.127 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:03.127 04:19:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:03.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:03.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:03.127 00:23:03.127 --- 10.0.0.3 ping statistics --- 00:23:03.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.127 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:03.127 04:19:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:03.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:03.127 00:23:03.127 --- 10.0.0.1 ping statistics --- 00:23:03.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.127 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:03.127 04:19:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.127 04:19:04 -- nvmf/common.sh@421 -- # return 0 00:23:03.127 04:19:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:03.127 04:19:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.127 04:19:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:03.127 04:19:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:03.127 04:19:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.127 04:19:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:03.127 04:19:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:03.127 04:19:04 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:03.127 04:19:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:03.127 04:19:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:03.128 04:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.128 04:19:04 -- nvmf/common.sh@469 -- # nvmfpid=98395 00:23:03.128 04:19:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:03.128 04:19:04 -- nvmf/common.sh@470 -- # waitforlisten 98395 00:23:03.128 04:19:04 -- common/autotest_common.sh@829 -- # '[' -z 98395 ']' 00:23:03.128 04:19:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.128 04:19:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.128 04:19:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.128 04:19:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.128 04:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.128 [2024-11-26 04:19:04.754819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:03.128 [2024-11-26 04:19:04.754907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.406 [2024-11-26 04:19:04.895174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.406 [2024-11-26 04:19:04.966379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:03.406 [2024-11-26 04:19:04.966792] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.406 [2024-11-26 04:19:04.966903] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:03.406 [2024-11-26 04:19:04.966987] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.406 [2024-11-26 04:19:04.967096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.351 04:19:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.351 04:19:05 -- common/autotest_common.sh@862 -- # return 0 00:23:04.351 04:19:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:04.351 04:19:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 04:19:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 [2024-11-26 04:19:05.906525] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 [2024-11-26 04:19:05.918679] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 null0 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 null1 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 null2 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 null3 00:23:04.351 04:19:05 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:04.351 04:19:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.351 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.351 04:19:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@47 -- # hostpid=98445 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:04.351 04:19:05 -- host/mdns_discovery.sh@48 -- # waitforlisten 98445 /tmp/host.sock 00:23:04.351 04:19:05 -- common/autotest_common.sh@829 -- # '[' -z 98445 ']' 00:23:04.352 04:19:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:04.352 04:19:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.352 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:04.352 04:19:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:04.352 04:19:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.352 04:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.352 [2024-11-26 04:19:06.021340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:04.352 [2024-11-26 04:19:06.021437] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98445 ] 00:23:04.611 [2024-11-26 04:19:06.157875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.611 [2024-11-26 04:19:06.238431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:04.611 [2024-11-26 04:19:06.238630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.548 04:19:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.548 04:19:06 -- common/autotest_common.sh@862 -- # return 0 00:23:05.548 04:19:06 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:05.548 04:19:06 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:05.548 04:19:06 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:05.548 04:19:07 -- host/mdns_discovery.sh@57 -- # avahipid=98475 00:23:05.548 04:19:07 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:05.548 04:19:07 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:05.548 04:19:07 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:05.548 Process 1062 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:05.548 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:05.548 Successfully dropped root privileges. 00:23:05.548 avahi-daemon 0.8 starting up. 00:23:05.548 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:05.548 Successfully called chroot(). 00:23:05.548 Successfully dropped remaining capabilities. 00:23:05.548 No service file found in /etc/avahi/services. 00:23:06.482 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
00:23:06.482 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:06.482 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:06.482 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:06.482 Network interface enumeration completed. 00:23:06.482 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:06.482 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:06.482 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:06.482 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:06.482 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 217980396. 00:23:06.482 04:19:08 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:06.482 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.482 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.482 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.482 04:19:08 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:06.482 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.482 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.482 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.482 04:19:08 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:06.482 04:19:08 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:06.482 04:19:08 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.483 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # xargs 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # sort 00:23:06.483 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.483 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.483 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.483 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.483 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:06.483 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.483 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.483 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.483 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # sort 
00:23:06.483 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.483 04:19:08 -- host/mdns_discovery.sh@68 -- # xargs 00:23:06.483 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.741 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.741 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.741 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:06.741 04:19:08 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:06.741 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.741 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.741 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@68 -- # sort 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@68 -- # xargs 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 [2024-11-26 04:19:08.380502] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 [2024-11-26 04:19:08.443319] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 
04:19:08 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 [2024-11-26 04:19:08.483251] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:06.742 04:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.742 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.742 [2024-11-26 04:19:08.491250] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:06.742 04:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98532 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:06.742 04:19:08 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:07.677 Established under name 'CDC' 00:23:07.677 [2024-11-26 04:19:09.280499] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:07.935 [2024-11-26 04:19:09.680514] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:07.935 [2024-11-26 04:19:09.680537] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:07.935 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:07.935 cookie is 0 00:23:07.935 is_local: 1 00:23:07.935 our_own: 0 00:23:07.935 wide_area: 0 00:23:07.935 multicast: 1 00:23:07.935 cached: 1 00:23:08.194 [2024-11-26 04:19:09.780506] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:08.194 [2024-11-26 04:19:09.780527] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:08.194 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:08.194 cookie is 0 00:23:08.194 is_local: 1 00:23:08.194 our_own: 0 00:23:08.194 wide_area: 0 00:23:08.194 multicast: 1 00:23:08.194 
cached: 1 00:23:09.130 [2024-11-26 04:19:10.691622] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:09.130 [2024-11-26 04:19:10.691658] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:09.130 [2024-11-26 04:19:10.691674] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:09.130 [2024-11-26 04:19:10.778720] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:09.130 [2024-11-26 04:19:10.791387] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:09.130 [2024-11-26 04:19:10.791405] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:09.130 [2024-11-26 04:19:10.791422] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:09.130 [2024-11-26 04:19:10.843066] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:09.130 [2024-11-26 04:19:10.843090] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:09.130 [2024-11-26 04:19:10.877223] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:09.388 [2024-11-26 04:19:10.931753] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:09.388 [2024-11-26 04:19:10.931776] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:11.922 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.922 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@80 -- # sort 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@80 -- # xargs 00:23:11.922 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:11.922 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@76 -- # sort 00:23:11.922 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@76 -- # xargs 00:23:11.922 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.922 04:19:13 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:11.922 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@68 -- # sort 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@68 -- # xargs 00:23:11.922 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:11.922 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.922 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@64 -- # sort 00:23:11.922 04:19:13 -- host/mdns_discovery.sh@64 -- # xargs 00:23:12.181 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # xargs 00:23:12.181 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.181 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:12.181 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:12.181 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:12.181 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@72 -- # xargs 00:23:12.181 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:12.181 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.181 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:12.181 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:12.181 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.181 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:12.181 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:12.181 04:19:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.181 04:19:13 -- common/autotest_common.sh@10 -- # set +x 00:23:12.181 04:19:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.181 04:19:13 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.559 04:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:13.559 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@64 -- # sort 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@64 -- # xargs 00:23:13.559 04:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:13.559 04:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:13.559 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:23:13.559 04:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:13.559 04:19:14 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:13.559 04:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.559 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:23:13.559 [2024-11-26 04:19:15.001681] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:13.559 [2024-11-26 04:19:15.002788] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:13.559 [2024-11-26 04:19:15.002816] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:13.559 [2024-11-26 04:19:15.002846] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:13.559 [2024-11-26 04:19:15.002857] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:13.559 04:19:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.559 04:19:15 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:13.559 04:19:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.559 04:19:15 -- common/autotest_common.sh@10 -- # set +x 00:23:13.559 [2024-11-26 04:19:15.009623] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:13.559 [2024-11-26 04:19:15.009803] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:13.559 [2024-11-26 04:19:15.010796] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:13.559 04:19:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.559 04:19:15 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:13.559 [2024-11-26 04:19:15.140887] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:13.559 [2024-11-26 04:19:15.141884] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:13.559 [2024-11-26 04:19:15.205055] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:13.559 [2024-11-26 04:19:15.205076] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:13.559 [2024-11-26 04:19:15.205082] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:13.559 [2024-11-26 04:19:15.205096] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:13.559 [2024-11-26 04:19:15.205145] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:13.559 [2024-11-26 04:19:15.205155] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
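For reference, the listener-addition step traced above (mdns_discovery.sh@147/@148) can be reproduced by hand. This is only a minimal sketch, assuming the same layout as this run: the NVMe-oF target app on its default RPC socket, the host app on /tmp/host.sock, and rpc_cmd resolving to SPDK's scripts/rpc.py run from the repository root.

    # target side: expose each subsystem on a second TCP port, as @147/@148 do above
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
    # host side: the discovery AER triggers a fresh log page read, after which both ports
    # should be reported for each mdns-discovered controller
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expect: 4420 4421

The jq | sort -n | xargs pipeline is the same normalization the trace's get_subsystem_paths helper applies before its [[ 4420 4421 == ... ]] comparison.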
00:23:13.559 [2024-11-26 04:19:15.205159] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:13.559 [2024-11-26 04:19:15.205171] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:13.559 [2024-11-26 04:19:15.250976] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:13.559 [2024-11-26 04:19:15.250994] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:13.559 [2024-11-26 04:19:15.251031] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:13.559 [2024-11-26 04:19:15.251038] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:14.495 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.495 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@68 -- # sort 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@68 -- # xargs 00:23:14.495 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.495 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@64 -- # sort 00:23:14.495 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@64 -- # xargs 00:23:14.495 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:14.495 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # xargs 00:23:14.495 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.495 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:14.495 04:19:16 -- 
host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:14.495 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.495 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # xargs 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:14.495 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:14.495 04:19:16 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:14.495 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.495 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.495 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.756 04:19:16 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:14.756 04:19:16 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:14.756 04:19:16 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:14.756 04:19:16 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.756 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.756 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.756 [2024-11-26 04:19:16.282571] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:14.756 [2024-11-26 04:19:16.282599] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.756 [2024-11-26 04:19:16.282627] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:14.756 [2024-11-26 04:19:16.282638] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:14.756 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.756 04:19:16 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:14.756 04:19:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.756 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:23:14.756 [2024-11-26 04:19:16.290587] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:14.756 [2024-11-26 04:19:16.290634] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:14.756 [2024-11-26 04:19:16.291664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.291700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.291720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.291730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.291738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 
04:19:16.291745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.291754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.291770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.756 [2024-11-26 04:19:16.293664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.293833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.293944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.294087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.294193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.294308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.294492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.756 [2024-11-26 04:19:16.294602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.756 [2024-11-26 04:19:16.294726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.756 04:19:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.756 04:19:16 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:14.757 [2024-11-26 04:19:16.301633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.303628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.311648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.757 [2024-11-26 04:19:16.311769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.311826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.311841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.757 [2024-11-26 04:19:16.311850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.311864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 
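The failure burst that begins here follows from the listener teardown traced just above (mdns_discovery.sh@160/@161). A rough hand-run equivalent, under the same assumption as before (target on its default RPC socket, scripts/rpc.py from the repo root):

    # target side: drop the original 4420 listeners
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

As the log shows, removing the listeners aborts the outstanding ASYNC EVENT REQUESTs (the SQ DELETION completions above) and drops the host's existing qpairs to port 4420, so the bdev_nvme layer keeps attempting to reconnect to that port until the next discovery log page removes the stale path.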
00:23:14.757 [2024-11-26 04:19:16.311876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.311885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.311893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.757 [2024-11-26 04:19:16.311907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.757 [2024-11-26 04:19:16.313636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.757 [2024-11-26 04:19:16.313706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.313769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.313783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.757 [2024-11-26 04:19:16.313792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.313806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.313817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.313825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.313833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.757 [2024-11-26 04:19:16.313845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.757 [2024-11-26 04:19:16.321696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.757 [2024-11-26 04:19:16.321775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.321812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.321826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.757 [2024-11-26 04:19:16.321842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.321862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.321873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.321881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.321888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.757 [2024-11-26 04:19:16.321900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
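errno = 111 in the repeated posix_sock_create lines is ECONNREFUSED: nothing is listening on port 4420 any more, so every reconnect attempt is refused. A quick way to confirm the errno name on the test host:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'    # ECONNREFUSED Connection refused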
00:23:14.757 [2024-11-26 04:19:16.323681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.757 [2024-11-26 04:19:16.323761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.323799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.323812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.757 [2024-11-26 04:19:16.323821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.323835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.323846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.323853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.323861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.757 [2024-11-26 04:19:16.323873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.757 [2024-11-26 04:19:16.331752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.757 [2024-11-26 04:19:16.331820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.331857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.331870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.757 [2024-11-26 04:19:16.331879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.331892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.331904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.331911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.331919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.757 [2024-11-26 04:19:16.331931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.757 [2024-11-26 04:19:16.333736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.757 [2024-11-26 04:19:16.333801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.333837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.333850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.757 [2024-11-26 04:19:16.333859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.333873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.333885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.333892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.333900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.757 [2024-11-26 04:19:16.333912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.757 [2024-11-26 04:19:16.341797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.757 [2024-11-26 04:19:16.341872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.341909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.341923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.757 [2024-11-26 04:19:16.341932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.341945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.341957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.341965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.341972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.757 [2024-11-26 04:19:16.341984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.757 [2024-11-26 04:19:16.343776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.757 [2024-11-26 04:19:16.343842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.343879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.343892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.757 [2024-11-26 04:19:16.343901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.343914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.343926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.343933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.343941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.757 [2024-11-26 04:19:16.343953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.757 [2024-11-26 04:19:16.351844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.757 [2024-11-26 04:19:16.351908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.351944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.757 [2024-11-26 04:19:16.351957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.757 [2024-11-26 04:19:16.351966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.757 [2024-11-26 04:19:16.351979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.757 [2024-11-26 04:19:16.351991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.757 [2024-11-26 04:19:16.351999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.757 [2024-11-26 04:19:16.352006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.757 [2024-11-26 04:19:16.352018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.757 [2024-11-26 04:19:16.353817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.757 [2024-11-26 04:19:16.353878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.353914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.353927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.758 [2024-11-26 04:19:16.353936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.353948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.353960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.353967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.353975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.758 [2024-11-26 04:19:16.353987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.758 [2024-11-26 04:19:16.361884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.758 [2024-11-26 04:19:16.361947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.361982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.362004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.758 [2024-11-26 04:19:16.362013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.362027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.362038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.362046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.362054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.758 [2024-11-26 04:19:16.362080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.758 [2024-11-26 04:19:16.363854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.758 [2024-11-26 04:19:16.363913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.363949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.363962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.758 [2024-11-26 04:19:16.363970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.363985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.363997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.364004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.364012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.758 [2024-11-26 04:19:16.364023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.758 [2024-11-26 04:19:16.371923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.758 [2024-11-26 04:19:16.371988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.372025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.372038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.758 [2024-11-26 04:19:16.372047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.372060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.372087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.372096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.372103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.758 [2024-11-26 04:19:16.372115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.758 [2024-11-26 04:19:16.373890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.758 [2024-11-26 04:19:16.373949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.373985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.374006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.758 [2024-11-26 04:19:16.374016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.374029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.374041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.374049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.374056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.758 [2024-11-26 04:19:16.374068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.758 [2024-11-26 04:19:16.381965] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.758 [2024-11-26 04:19:16.382039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.382077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.382091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.758 [2024-11-26 04:19:16.382100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.382113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.382159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.382169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.382178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.758 [2024-11-26 04:19:16.382190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.758 [2024-11-26 04:19:16.383926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.758 [2024-11-26 04:19:16.383987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.384023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.384036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.758 [2024-11-26 04:19:16.384044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.384058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.384070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.384077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.384085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.758 [2024-11-26 04:19:16.384097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.758 [2024-11-26 04:19:16.392014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.758 [2024-11-26 04:19:16.392076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.392113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.392127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.758 [2024-11-26 04:19:16.392135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.392148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.392174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.392183] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.392191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.758 [2024-11-26 04:19:16.392203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.758 [2024-11-26 04:19:16.393963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.758 [2024-11-26 04:19:16.394029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.394066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.394086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.758 [2024-11-26 04:19:16.394097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.758 [2024-11-26 04:19:16.394110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.758 [2024-11-26 04:19:16.394121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.758 [2024-11-26 04:19:16.394128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.758 [2024-11-26 04:19:16.394136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.758 [2024-11-26 04:19:16.394148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.758 [2024-11-26 04:19:16.402053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.758 [2024-11-26 04:19:16.402114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.402150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.758 [2024-11-26 04:19:16.402163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.758 [2024-11-26 04:19:16.402171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.759 [2024-11-26 04:19:16.402184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.759 [2024-11-26 04:19:16.402209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.759 [2024-11-26 04:19:16.402218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.759 [2024-11-26 04:19:16.402225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.759 [2024-11-26 04:19:16.402237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.759 [2024-11-26 04:19:16.403999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.759 [2024-11-26 04:19:16.404058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.404094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.404106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.759 [2024-11-26 04:19:16.404115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.759 [2024-11-26 04:19:16.404128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.759 [2024-11-26 04:19:16.404140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.759 [2024-11-26 04:19:16.404147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.759 [2024-11-26 04:19:16.404155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.759 [2024-11-26 04:19:16.404166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.759 [2024-11-26 04:19:16.412091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.759 [2024-11-26 04:19:16.412151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.412187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.412201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.759 [2024-11-26 04:19:16.412210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.759 [2024-11-26 04:19:16.412222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.759 [2024-11-26 04:19:16.412246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.759 [2024-11-26 04:19:16.412255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.759 [2024-11-26 04:19:16.412263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.759 [2024-11-26 04:19:16.412274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
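Once this retry loop settles, the discovery poller processes the next log page and drops the 4420 path while keeping 4421 (the "not found" / "found again" lines that follow). The later get_subsystem_paths checks (@166/@167) can be reproduced with the same pipeline sketched earlier, assuming the host app is still on /tmp/host.sock:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expect: 4421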
00:23:14.759 [2024-11-26 04:19:16.414035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:14.759 [2024-11-26 04:19:16.414095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.414130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.414143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154c760 with addr=10.0.0.3, port=4420 00:23:14.759 [2024-11-26 04:19:16.414151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154c760 is same with the state(5) to be set 00:23:14.759 [2024-11-26 04:19:16.414164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154c760 (9): Bad file descriptor 00:23:14.759 [2024-11-26 04:19:16.414176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:14.759 [2024-11-26 04:19:16.414183] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:14.759 [2024-11-26 04:19:16.414190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:14.759 [2024-11-26 04:19:16.414202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.759 [2024-11-26 04:19:16.422129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:14.759 [2024-11-26 04:19:16.422190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.422226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.759 [2024-11-26 04:19:16.422239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1561aa0 with addr=10.0.0.2, port=4420 00:23:14.759 [2024-11-26 04:19:16.422248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1561aa0 is same with the state(5) to be set 00:23:14.759 [2024-11-26 04:19:16.422261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1561aa0 (9): Bad file descriptor 00:23:14.759 [2024-11-26 04:19:16.422293] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:14.759 [2024-11-26 04:19:16.422309] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:14.759 [2024-11-26 04:19:16.422324] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.759 [2024-11-26 04:19:16.422352] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:14.759 [2024-11-26 04:19:16.422365] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:14.759 [2024-11-26 04:19:16.422376] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:14.759 [2024-11-26 04:19:16.422399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:14.759 [2024-11-26 04:19:16.422411] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:14.759 [2024-11-26 04:19:16.422419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:14.759 [2024-11-26 04:19:16.422440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.759 [2024-11-26 04:19:16.508352] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:14.759 [2024-11-26 04:19:16.508400] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:15.697 04:19:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.697 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@68 -- # sort 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@68 -- # xargs 00:23:15.697 04:19:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.697 04:19:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:15.697 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@64 -- # sort 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@64 -- # xargs 00:23:15.697 04:19:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.697 04:19:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.697 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@72 -- # xargs 00:23:15.697 04:19:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:15.697 04:19:17 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:15.955 04:19:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.955 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:23:15.955 04:19:17 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.955 04:19:17 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:15.955 04:19:17 -- 
host/mdns_discovery.sh@72 -- # xargs 00:23:15.955 04:19:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:15.956 04:19:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.956 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:15.956 04:19:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:15.956 04:19:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.956 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:23:15.956 04:19:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.956 04:19:17 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:15.956 [2024-11-26 04:19:17.580508] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:16.892 04:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:16.892 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@80 -- # sort 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@80 -- # xargs 00:23:16.892 04:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.892 04:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.892 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@68 -- # sort 00:23:16.892 04:19:18 -- host/mdns_discovery.sh@68 -- # xargs 00:23:16.892 04:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.152 04:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.152 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:17.152 04:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:17.152 04:19:18 -- 
host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:17.152 04:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.152 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.152 04:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:17.152 04:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.152 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.152 04:19:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:17.152 04:19:18 -- common/autotest_common.sh@650 -- # local es=0 00:23:17.152 04:19:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:17.152 04:19:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.152 04:19:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.152 04:19:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.152 04:19:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.152 04:19:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:17.152 04:19:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.152 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:23:17.152 [2024-11-26 04:19:18.811421] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:17.152 2024/11/26 04:19:18 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:17.152 request: 00:23:17.152 { 00:23:17.152 "method": "bdev_nvme_start_mdns_discovery", 00:23:17.152 "params": { 00:23:17.152 "name": "mdns", 00:23:17.152 "svcname": "_nvme-disc._http", 00:23:17.152 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:17.152 } 00:23:17.152 } 00:23:17.152 Got JSON-RPC error response 00:23:17.152 GoRPCClient: error on JSON-RPC call 00:23:17.152 04:19:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.152 04:19:18 -- common/autotest_common.sh@653 -- # es=1 00:23:17.152 04:19:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.152 04:19:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.152 04:19:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.152 04:19:18 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:17.719 [2024-11-26 04:19:19.200100] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:17.719 [2024-11-26 04:19:19.300100] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:17.719 [2024-11-26 04:19:19.400105] 
bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:17.720 [2024-11-26 04:19:19.400260] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:17.720 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:17.720 cookie is 0 00:23:17.720 is_local: 1 00:23:17.720 our_own: 0 00:23:17.720 wide_area: 0 00:23:17.720 multicast: 1 00:23:17.720 cached: 1 00:23:17.978 [2024-11-26 04:19:19.500103] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:17.978 [2024-11-26 04:19:19.500257] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:17.978 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:17.978 cookie is 0 00:23:17.978 is_local: 1 00:23:17.978 our_own: 0 00:23:17.978 wide_area: 0 00:23:17.978 multicast: 1 00:23:17.978 cached: 1 00:23:18.916 [2024-11-26 04:19:20.413101] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:18.916 [2024-11-26 04:19:20.413264] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:18.916 [2024-11-26 04:19:20.413316] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:18.916 [2024-11-26 04:19:20.499184] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:18.916 [2024-11-26 04:19:20.512994] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:18.916 [2024-11-26 04:19:20.513125] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:18.916 [2024-11-26 04:19:20.513175] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.916 [2024-11-26 04:19:20.569009] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:18.916 [2024-11-26 04:19:20.569173] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:18.916 [2024-11-26 04:19:20.599351] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:18.916 [2024-11-26 04:19:20.658051] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:18.916 [2024-11-26 04:19:20.658233] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@80 -- # sort 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@80 -- # xargs 00:23:22.204 04:19:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.204 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.204 04:19:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@185 -- # [[ mdns == 
\m\d\n\s ]] 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.204 04:19:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:22.204 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@76 -- # sort 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@76 -- # xargs 00:23:22.204 04:19:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.204 04:19:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.204 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.204 04:19:23 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.464 04:19:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.464 04:19:23 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:22.464 04:19:23 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:22.464 04:19:23 -- common/autotest_common.sh@650 -- # local es=0 00:23:22.464 04:19:23 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:22.464 04:19:23 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:22.464 04:19:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.464 04:19:23 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:22.464 04:19:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.464 04:19:23 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:22.464 04:19:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.464 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.464 [2024-11-26 04:19:23.999290] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:22.464 2024/11/26 04:19:24 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:22.464 request: 00:23:22.464 { 00:23:22.464 "method": "bdev_nvme_start_mdns_discovery", 00:23:22.464 "params": { 00:23:22.464 "name": "cdc", 00:23:22.464 "svcname": "_nvme-disc._tcp", 00:23:22.464 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:22.464 } 00:23:22.464 } 00:23:22.464 Got JSON-RPC error response 00:23:22.464 GoRPCClient: error on JSON-RPC call 00:23:22.464 04:19:24 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:22.464 04:19:24 -- common/autotest_common.sh@653 -- # 
es=1 00:23:22.464 04:19:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.464 04:19:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.464 04:19:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.464 04:19:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.464 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@76 -- # xargs 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@76 -- # sort 00:23:22.464 04:19:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.464 04:19:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.464 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.464 04:19:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:22.464 04:19:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.464 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:23:22.464 04:19:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@197 -- # kill 98445 00:23:22.464 04:19:24 -- host/mdns_discovery.sh@200 -- # wait 98445 00:23:22.724 [2024-11-26 04:19:24.264160] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:22.724 04:19:24 -- host/mdns_discovery.sh@201 -- # kill 98532 00:23:22.724 Got SIGTERM, quitting. 00:23:22.724 04:19:24 -- host/mdns_discovery.sh@202 -- # kill 98475 00:23:22.724 Got SIGTERM, quitting. 00:23:22.724 04:19:24 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:22.724 04:19:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:22.724 04:19:24 -- nvmf/common.sh@116 -- # sync 00:23:22.724 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:22.724 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:22.724 avahi-daemon 0.8 exiting. 
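The run above exercises SPDK's mDNS-based discovery entirely through the JSON-RPC socket of the host application: discovery is started against the avahi-advertised _nvme-disc._tcp service, its state and the controllers/bdevs it produced are queried, a second start that reuses the running service is expected to fail with Code=-17 (File exists), and the service is stopped again before teardown. As a minimal by-hand sketch, assuming a host SPDK application is already listening on /tmp/host.sock and an _nvme-disc._tcp service is being advertised on the test network (the NQN and names below are simply the values this test uses):

    # start mDNS discovery under the bdev name "mdns"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # inspect the discovery service and what it attached
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs

    # a second start while "mdns" is running is rejected with Code=-17 (File exists), as seen above
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || echo "already running"

    # stop discovery and let the avahi poller wind down
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns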
00:23:22.724 04:19:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:22.724 04:19:24 -- nvmf/common.sh@119 -- # set +e 00:23:22.724 04:19:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:22.724 04:19:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:22.724 rmmod nvme_tcp 00:23:22.724 rmmod nvme_fabrics 00:23:22.724 rmmod nvme_keyring 00:23:22.724 04:19:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:22.724 04:19:24 -- nvmf/common.sh@123 -- # set -e 00:23:22.724 04:19:24 -- nvmf/common.sh@124 -- # return 0 00:23:22.724 04:19:24 -- nvmf/common.sh@477 -- # '[' -n 98395 ']' 00:23:22.724 04:19:24 -- nvmf/common.sh@478 -- # killprocess 98395 00:23:22.725 04:19:24 -- common/autotest_common.sh@936 -- # '[' -z 98395 ']' 00:23:22.725 04:19:24 -- common/autotest_common.sh@940 -- # kill -0 98395 00:23:22.725 04:19:24 -- common/autotest_common.sh@941 -- # uname 00:23:22.983 04:19:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:22.983 04:19:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98395 00:23:22.983 killing process with pid 98395 00:23:22.983 04:19:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:22.983 04:19:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:22.983 04:19:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98395' 00:23:22.983 04:19:24 -- common/autotest_common.sh@955 -- # kill 98395 00:23:22.983 04:19:24 -- common/autotest_common.sh@960 -- # wait 98395 00:23:22.983 04:19:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:22.983 04:19:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:22.983 04:19:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:22.983 04:19:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.983 04:19:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:22.983 04:19:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.983 04:19:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.983 04:19:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.983 04:19:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:22.983 ************************************ 00:23:22.983 END TEST nvmf_mdns_discovery 00:23:22.983 ************************************ 00:23:22.983 00:23:22.983 real 0m20.630s 00:23:22.983 user 0m40.184s 00:23:22.983 sys 0m1.958s 00:23:22.983 04:19:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:22.983 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:23:23.242 04:19:24 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:23.242 04:19:24 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:23.242 04:19:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:23.242 04:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:23.242 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:23:23.242 ************************************ 00:23:23.242 START TEST nvmf_multipath 00:23:23.242 ************************************ 00:23:23.242 04:19:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:23.242 * Looking for test storage... 
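nvmftestfini, traced just before the END TEST banner, is the standard teardown for these suites: the kernel NVMe-oF modules pulled in for the test are unloaded, the nvmf target is killed by PID, and the virtual test network is removed. Reduced to its essentials, assuming the target PID is held in $nvmfpid as in the scripts traced here (the namespace removal itself happens inside remove_spdk_ns, which is not traced; deleting it directly is shown only as an illustration):

    # unload the initiator-side kernel modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the nvmf target and wait for it to exit
    kill "$nvmfpid"
    wait "$nvmfpid"

    # drop the target namespace (an assumed stand-in for remove_spdk_ns) and flush the initiator address
    ip netns delete nvmf_tgt_ns_spdk
    ip -4 addr flush nvmf_init_if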
00:23:23.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:23.242 04:19:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:23.242 04:19:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:23.242 04:19:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:23.242 04:19:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:23.242 04:19:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:23.242 04:19:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:23.242 04:19:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:23.242 04:19:24 -- scripts/common.sh@335 -- # IFS=.-: 00:23:23.242 04:19:24 -- scripts/common.sh@335 -- # read -ra ver1 00:23:23.242 04:19:24 -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.242 04:19:24 -- scripts/common.sh@336 -- # read -ra ver2 00:23:23.242 04:19:24 -- scripts/common.sh@337 -- # local 'op=<' 00:23:23.242 04:19:24 -- scripts/common.sh@339 -- # ver1_l=2 00:23:23.242 04:19:24 -- scripts/common.sh@340 -- # ver2_l=1 00:23:23.242 04:19:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:23.242 04:19:24 -- scripts/common.sh@343 -- # case "$op" in 00:23:23.242 04:19:24 -- scripts/common.sh@344 -- # : 1 00:23:23.242 04:19:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:23.242 04:19:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.242 04:19:24 -- scripts/common.sh@364 -- # decimal 1 00:23:23.242 04:19:24 -- scripts/common.sh@352 -- # local d=1 00:23:23.242 04:19:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.242 04:19:24 -- scripts/common.sh@354 -- # echo 1 00:23:23.242 04:19:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:23.242 04:19:24 -- scripts/common.sh@365 -- # decimal 2 00:23:23.242 04:19:24 -- scripts/common.sh@352 -- # local d=2 00:23:23.242 04:19:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.242 04:19:24 -- scripts/common.sh@354 -- # echo 2 00:23:23.242 04:19:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:23.242 04:19:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:23.242 04:19:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:23.242 04:19:24 -- scripts/common.sh@367 -- # return 0 00:23:23.242 04:19:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.242 04:19:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:23.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.242 --rc genhtml_branch_coverage=1 00:23:23.242 --rc genhtml_function_coverage=1 00:23:23.242 --rc genhtml_legend=1 00:23:23.242 --rc geninfo_all_blocks=1 00:23:23.242 --rc geninfo_unexecuted_blocks=1 00:23:23.242 00:23:23.242 ' 00:23:23.242 04:19:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:23.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.242 --rc genhtml_branch_coverage=1 00:23:23.242 --rc genhtml_function_coverage=1 00:23:23.242 --rc genhtml_legend=1 00:23:23.242 --rc geninfo_all_blocks=1 00:23:23.242 --rc geninfo_unexecuted_blocks=1 00:23:23.242 00:23:23.242 ' 00:23:23.242 04:19:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:23.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.242 --rc genhtml_branch_coverage=1 00:23:23.242 --rc genhtml_function_coverage=1 00:23:23.242 --rc genhtml_legend=1 00:23:23.242 --rc geninfo_all_blocks=1 00:23:23.242 --rc geninfo_unexecuted_blocks=1 00:23:23.242 00:23:23.242 ' 00:23:23.242 
04:19:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:23.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.242 --rc genhtml_branch_coverage=1 00:23:23.242 --rc genhtml_function_coverage=1 00:23:23.242 --rc genhtml_legend=1 00:23:23.242 --rc geninfo_all_blocks=1 00:23:23.242 --rc geninfo_unexecuted_blocks=1 00:23:23.242 00:23:23.242 ' 00:23:23.242 04:19:24 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.242 04:19:24 -- nvmf/common.sh@7 -- # uname -s 00:23:23.242 04:19:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.242 04:19:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.242 04:19:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.242 04:19:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.242 04:19:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.242 04:19:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.242 04:19:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.242 04:19:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.242 04:19:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.242 04:19:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.242 04:19:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:23:23.242 04:19:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:23:23.242 04:19:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.242 04:19:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.242 04:19:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.242 04:19:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.242 04:19:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.242 04:19:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.242 04:19:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.242 04:19:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.242 04:19:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.243 04:19:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.243 04:19:24 -- paths/export.sh@5 -- # export PATH 00:23:23.243 04:19:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.243 04:19:24 -- nvmf/common.sh@46 -- # : 0 00:23:23.243 04:19:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:23.243 04:19:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:23.243 04:19:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:23.243 04:19:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.243 04:19:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.243 04:19:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:23.243 04:19:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:23.243 04:19:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:23.243 04:19:25 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.243 04:19:25 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.243 04:19:25 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.243 04:19:25 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:23.243 04:19:25 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.502 04:19:25 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:23.502 04:19:25 -- host/multipath.sh@30 -- # nvmftestinit 00:23:23.502 04:19:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:23.502 04:19:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.502 04:19:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:23.502 04:19:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:23.502 04:19:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:23.502 04:19:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.502 04:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.502 04:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.502 04:19:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:23.502 04:19:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:23.502 04:19:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:23.502 04:19:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:23.502 04:19:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:23.502 04:19:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:23.502 04:19:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.502 04:19:25 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.502 04:19:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:23.502 04:19:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:23.502 04:19:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:23.502 04:19:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:23.502 04:19:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:23.502 04:19:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.502 04:19:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:23.502 04:19:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:23.502 04:19:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:23.502 04:19:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:23.502 04:19:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:23.502 04:19:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:23.502 Cannot find device "nvmf_tgt_br" 00:23:23.502 04:19:25 -- nvmf/common.sh@154 -- # true 00:23:23.502 04:19:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:23.502 Cannot find device "nvmf_tgt_br2" 00:23:23.502 04:19:25 -- nvmf/common.sh@155 -- # true 00:23:23.502 04:19:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:23.502 04:19:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:23.502 Cannot find device "nvmf_tgt_br" 00:23:23.502 04:19:25 -- nvmf/common.sh@157 -- # true 00:23:23.502 04:19:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:23.502 Cannot find device "nvmf_tgt_br2" 00:23:23.502 04:19:25 -- nvmf/common.sh@158 -- # true 00:23:23.502 04:19:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:23.502 04:19:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:23.502 04:19:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:23.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.502 04:19:25 -- nvmf/common.sh@161 -- # true 00:23:23.502 04:19:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:23.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.502 04:19:25 -- nvmf/common.sh@162 -- # true 00:23:23.502 04:19:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:23.502 04:19:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:23.502 04:19:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:23.502 04:19:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:23.502 04:19:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:23.502 04:19:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:23.502 04:19:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:23.502 04:19:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:23.502 04:19:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:23.502 04:19:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:23.502 04:19:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:23.502 04:19:25 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:23.502 04:19:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:23.502 04:19:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:23.502 04:19:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:23.502 04:19:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:23.502 04:19:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:23.761 04:19:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:23.761 04:19:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:23.761 04:19:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:23.761 04:19:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:23.761 04:19:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:23.761 04:19:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:23.761 04:19:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:23.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:23.761 00:23:23.761 --- 10.0.0.2 ping statistics --- 00:23:23.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.761 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:23.761 04:19:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:23.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:23.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:23:23.761 00:23:23.761 --- 10.0.0.3 ping statistics --- 00:23:23.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.761 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:23.761 04:19:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:23.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:23:23.761 00:23:23.761 --- 10.0.0.1 ping statistics --- 00:23:23.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.761 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:23.761 04:19:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.761 04:19:25 -- nvmf/common.sh@421 -- # return 0 00:23:23.761 04:19:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:23.761 04:19:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.761 04:19:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:23.761 04:19:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:23.761 04:19:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.761 04:19:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:23.761 04:19:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:23.761 04:19:25 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:23.761 04:19:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:23.761 04:19:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.761 04:19:25 -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 04:19:25 -- nvmf/common.sh@469 -- # nvmfpid=99049 00:23:23.761 04:19:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:23.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
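nvmf_veth_init, traced above, builds the virtual network that every 10.0.0.x address in this log refers to: veth pairs joined by the nvmf_br bridge, with the initiator end (nvmf_init_if, 10.0.0.1) left in the root namespace and the target ends (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace where nvmf_tgt runs. A stripped-down sketch of the same topology with a single target interface, using the same commands the script issues:

    # namespace for the target side
    ip netns add nvmf_tgt_ns_spdk

    # one veth pair per side; the *_br ends stay in the root namespace for bridging
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addresses: initiator 10.0.0.1, target 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the root-namespace ends together so the two sides can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # admit NVMe/TCP traffic and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # the ping statistics above are this same reachability check
    ping -c 1 10.0.0.2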
00:23:23.761 04:19:25 -- nvmf/common.sh@470 -- # waitforlisten 99049 00:23:23.761 04:19:25 -- common/autotest_common.sh@829 -- # '[' -z 99049 ']' 00:23:23.761 04:19:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.761 04:19:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.761 04:19:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.761 04:19:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.761 04:19:25 -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 [2024-11-26 04:19:25.417321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:23.761 [2024-11-26 04:19:25.417419] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.020 [2024-11-26 04:19:25.553757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:24.020 [2024-11-26 04:19:25.622760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:24.020 [2024-11-26 04:19:25.622901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.020 [2024-11-26 04:19:25.622913] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.020 [2024-11-26 04:19:25.622921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.020 [2024-11-26 04:19:25.623075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.020 [2024-11-26 04:19:25.623242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.957 04:19:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.957 04:19:26 -- common/autotest_common.sh@862 -- # return 0 00:23:24.957 04:19:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:24.957 04:19:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.957 04:19:26 -- common/autotest_common.sh@10 -- # set +x 00:23:24.957 04:19:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.957 04:19:26 -- host/multipath.sh@33 -- # nvmfapp_pid=99049 00:23:24.957 04:19:26 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:25.216 [2024-11-26 04:19:26.752801] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.216 04:19:26 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:25.475 Malloc0 00:23:25.475 04:19:27 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:25.475 04:19:27 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.734 04:19:27 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.993 [2024-11-26 04:19:27.655530] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.993 04:19:27 -- host/multipath.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:26.251 [2024-11-26 04:19:27.911705] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:26.251 04:19:27 -- host/multipath.sh@44 -- # bdevperf_pid=99157 00:23:26.251 04:19:27 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:26.251 04:19:27 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.251 04:19:27 -- host/multipath.sh@47 -- # waitforlisten 99157 /var/tmp/bdevperf.sock 00:23:26.251 04:19:27 -- common/autotest_common.sh@829 -- # '[' -z 99157 ']' 00:23:26.251 04:19:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.251 04:19:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.251 04:19:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.251 04:19:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.251 04:19:27 -- common/autotest_common.sh@10 -- # set +x 00:23:27.640 04:19:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.640 04:19:28 -- common/autotest_common.sh@862 -- # return 0 00:23:27.640 04:19:28 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:27.640 04:19:29 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:27.903 Nvme0n1 00:23:27.903 04:19:29 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:28.161 Nvme0n1 00:23:28.161 04:19:29 -- host/multipath.sh@78 -- # sleep 1 00:23:28.161 04:19:29 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:29.096 04:19:30 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:29.097 04:19:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:29.355 04:19:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:29.614 04:19:31 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:29.614 04:19:31 -- host/multipath.sh@65 -- # dtrace_pid=99240 00:23:29.614 04:19:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:29.614 04:19:31 -- host/multipath.sh@66 -- # sleep 6 00:23:36.182 04:19:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:36.182 04:19:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:36.182 
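confirm_io_on_port, whose expansion is traced around this point, decides which path bdevperf is actually using: it launches nvmf_path.bt under bpftrace to count I/O per listener, then asks the target which listener currently carries the requested ANA state and matches that port against the @path[...] counters collected in trace.txt. The state query on its own, against the cnode1 subsystem and the two 10.0.0.2 listeners created earlier in this run, is just:

    # list every listener of the subsystem with its per-group ANA states
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1

    # pick out the port whose first ANA group is currently "optimized"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'

    # steering I/O to the other path is done by swapping the listeners' ANA states
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible

The trsvcid the jq filter returns is what the trace below records as active_port before it is compared against the ports that actually received I/O.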
04:19:37 -- host/multipath.sh@67 -- # active_port=4421 00:23:36.182 04:19:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:36.182 Attaching 4 probes... 00:23:36.182 @path[10.0.0.2, 4421]: 22654 00:23:36.182 @path[10.0.0.2, 4421]: 23277 00:23:36.182 @path[10.0.0.2, 4421]: 23222 00:23:36.182 @path[10.0.0.2, 4421]: 23226 00:23:36.182 @path[10.0.0.2, 4421]: 23262 00:23:36.182 04:19:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:36.182 04:19:37 -- host/multipath.sh@69 -- # sed -n 1p 00:23:36.182 04:19:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:36.182 04:19:37 -- host/multipath.sh@69 -- # port=4421 00:23:36.182 04:19:37 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.182 04:19:37 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.182 04:19:37 -- host/multipath.sh@72 -- # kill 99240 00:23:36.182 04:19:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:36.182 04:19:37 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:36.182 04:19:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.182 04:19:37 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:36.442 04:19:37 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:36.442 04:19:37 -- host/multipath.sh@65 -- # dtrace_pid=99371 00:23:36.442 04:19:37 -- host/multipath.sh@66 -- # sleep 6 00:23:36.442 04:19:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:43.061 04:19:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:43.061 04:19:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:43.061 04:19:44 -- host/multipath.sh@67 -- # active_port=4420 00:23:43.061 04:19:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.061 Attaching 4 probes... 
00:23:43.061 @path[10.0.0.2, 4420]: 23108 00:23:43.061 @path[10.0.0.2, 4420]: 23231 00:23:43.061 @path[10.0.0.2, 4420]: 23359 00:23:43.061 @path[10.0.0.2, 4420]: 23281 00:23:43.061 @path[10.0.0.2, 4420]: 23368 00:23:43.061 04:19:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:43.061 04:19:44 -- host/multipath.sh@69 -- # sed -n 1p 00:23:43.061 04:19:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:43.061 04:19:44 -- host/multipath.sh@69 -- # port=4420 00:23:43.061 04:19:44 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:43.061 04:19:44 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:43.061 04:19:44 -- host/multipath.sh@72 -- # kill 99371 00:23:43.061 04:19:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.061 04:19:44 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:43.061 04:19:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:43.061 04:19:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:43.061 04:19:44 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:43.061 04:19:44 -- host/multipath.sh@65 -- # dtrace_pid=99507 00:23:43.061 04:19:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:43.061 04:19:44 -- host/multipath.sh@66 -- # sleep 6 00:23:49.638 04:19:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:49.638 04:19:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:49.638 04:19:50 -- host/multipath.sh@67 -- # active_port=4421 00:23:49.638 04:19:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:49.638 Attaching 4 probes... 
00:23:49.638 @path[10.0.0.2, 4421]: 15090 00:23:49.638 @path[10.0.0.2, 4421]: 20934 00:23:49.638 @path[10.0.0.2, 4421]: 21253 00:23:49.638 @path[10.0.0.2, 4421]: 21071 00:23:49.638 @path[10.0.0.2, 4421]: 20798 00:23:49.638 04:19:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:49.638 04:19:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:49.638 04:19:50 -- host/multipath.sh@69 -- # sed -n 1p 00:23:49.638 04:19:50 -- host/multipath.sh@69 -- # port=4421 00:23:49.638 04:19:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.638 04:19:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.638 04:19:50 -- host/multipath.sh@72 -- # kill 99507 00:23:49.638 04:19:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:49.638 04:19:50 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:49.638 04:19:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:49.638 04:19:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:49.898 04:19:51 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:49.898 04:19:51 -- host/multipath.sh@65 -- # dtrace_pid=99639 00:23:49.898 04:19:51 -- host/multipath.sh@66 -- # sleep 6 00:23:49.898 04:19:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:56.462 04:19:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:56.462 04:19:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:56.463 04:19:57 -- host/multipath.sh@67 -- # active_port= 00:23:56.463 04:19:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:56.463 Attaching 4 probes... 
00:23:56.463 00:23:56.463 00:23:56.463 00:23:56.463 00:23:56.463 00:23:56.463 04:19:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:56.463 04:19:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:56.463 04:19:57 -- host/multipath.sh@69 -- # sed -n 1p 00:23:56.463 04:19:57 -- host/multipath.sh@69 -- # port= 00:23:56.463 04:19:57 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:56.463 04:19:57 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:56.463 04:19:57 -- host/multipath.sh@72 -- # kill 99639 00:23:56.463 04:19:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:56.463 04:19:57 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:56.463 04:19:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.463 04:19:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:56.463 04:19:58 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:56.463 04:19:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:56.463 04:19:58 -- host/multipath.sh@65 -- # dtrace_pid=99769 00:23:56.463 04:19:58 -- host/multipath.sh@66 -- # sleep 6 00:24:03.031 04:20:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:03.031 04:20:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:03.031 04:20:04 -- host/multipath.sh@67 -- # active_port=4421 00:24:03.031 04:20:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.031 Attaching 4 probes... 
00:24:03.031 @path[10.0.0.2, 4421]: 21957 00:24:03.031 @path[10.0.0.2, 4421]: 22106 00:24:03.031 @path[10.0.0.2, 4421]: 22388 00:24:03.031 @path[10.0.0.2, 4421]: 20687 00:24:03.031 @path[10.0.0.2, 4421]: 20652 00:24:03.031 04:20:04 -- host/multipath.sh@69 -- # sed -n 1p 00:24:03.031 04:20:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:03.031 04:20:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:03.031 04:20:04 -- host/multipath.sh@69 -- # port=4421 00:24:03.031 04:20:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:03.031 04:20:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:03.031 04:20:04 -- host/multipath.sh@72 -- # kill 99769 00:24:03.031 04:20:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.031 04:20:04 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:03.031 [2024-11-26 04:20:04.712013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.031 [2024-11-26 04:20:04.712244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712485] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the 
state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712839] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 [2024-11-26 04:20:04.712857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7370 is same with the state(5) to be set 00:24:03.032 04:20:04 -- host/multipath.sh@101 -- # sleep 1 00:24:03.969 04:20:05 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:04.228 04:20:05 -- host/multipath.sh@65 -- # dtrace_pid=99905 00:24:04.228 04:20:05 -- host/multipath.sh@66 -- # sleep 6 00:24:04.228 04:20:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:10.797 04:20:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:10.797 04:20:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:10.797 04:20:11 -- host/multipath.sh@67 -- # active_port=4420 00:24:10.797 04:20:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.797 Attaching 4 probes... 
00:24:10.797 @path[10.0.0.2, 4420]: 21980 00:24:10.797 @path[10.0.0.2, 4420]: 21448 00:24:10.797 @path[10.0.0.2, 4420]: 21436 00:24:10.797 @path[10.0.0.2, 4420]: 21261 00:24:10.797 @path[10.0.0.2, 4420]: 21387 00:24:10.797 04:20:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:10.797 04:20:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:10.797 04:20:11 -- host/multipath.sh@69 -- # sed -n 1p 00:24:10.797 04:20:11 -- host/multipath.sh@69 -- # port=4420 00:24:10.797 04:20:11 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:10.797 04:20:11 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:10.797 04:20:11 -- host/multipath.sh@72 -- # kill 99905 00:24:10.797 04:20:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.797 04:20:11 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:10.797 [2024-11-26 04:20:12.241035] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:10.797 04:20:12 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:10.797 04:20:12 -- host/multipath.sh@111 -- # sleep 6 00:24:17.363 04:20:18 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:17.363 04:20:18 -- host/multipath.sh@65 -- # dtrace_pid=100097 00:24:17.363 04:20:18 -- host/multipath.sh@66 -- # sleep 6 00:24:17.363 04:20:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99049 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:23.943 04:20:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:23.943 04:20:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:23.943 04:20:24 -- host/multipath.sh@67 -- # active_port=4421 00:24:23.943 04:20:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:23.943 Attaching 4 probes... 
00:24:23.943 @path[10.0.0.2, 4421]: 19889 00:24:23.943 @path[10.0.0.2, 4421]: 20440 00:24:23.943 @path[10.0.0.2, 4421]: 20460 00:24:23.943 @path[10.0.0.2, 4421]: 20398 00:24:23.943 @path[10.0.0.2, 4421]: 20392 00:24:23.943 04:20:24 -- host/multipath.sh@69 -- # sed -n 1p 00:24:23.943 04:20:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:23.943 04:20:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:23.943 04:20:24 -- host/multipath.sh@69 -- # port=4421 00:24:23.943 04:20:24 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.943 04:20:24 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.943 04:20:24 -- host/multipath.sh@72 -- # kill 100097 00:24:23.943 04:20:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:23.943 04:20:24 -- host/multipath.sh@114 -- # killprocess 99157 00:24:23.943 04:20:24 -- common/autotest_common.sh@936 -- # '[' -z 99157 ']' 00:24:23.943 04:20:24 -- common/autotest_common.sh@940 -- # kill -0 99157 00:24:23.943 04:20:24 -- common/autotest_common.sh@941 -- # uname 00:24:23.943 04:20:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.943 04:20:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99157 00:24:23.943 killing process with pid 99157 00:24:23.943 04:20:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:23.943 04:20:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:23.943 04:20:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99157' 00:24:23.943 04:20:24 -- common/autotest_common.sh@955 -- # kill 99157 00:24:23.943 04:20:24 -- common/autotest_common.sh@960 -- # wait 99157 00:24:23.943 Connection closed with partial response: 00:24:23.943 00:24:23.943 00:24:23.943 04:20:25 -- host/multipath.sh@116 -- # wait 99157 00:24:23.943 04:20:25 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:23.943 [2024-11-26 04:19:27.968786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:23.943 [2024-11-26 04:19:27.968880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99157 ] 00:24:23.943 [2024-11-26 04:19:28.105690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.943 [2024-11-26 04:19:28.174911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.943 Running I/O for 90 seconds... 
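The confirm_io_on_port checks traced above boil down to two lookups: ask the target which listener currently reports the expected ANA state, and ask the bpftrace nvmf_path.bt histogram which port the host actually sent I/O to. The sketch below is a hedged reconstruction of that logic from the trace, not the multipath.sh source; the NQN, rpc.py path and trace-file path are simply the values visible in the log above, everything else is illustrative.

  #!/usr/bin/env bash
  # Sketch of the port check seen in the trace (assumes rpc.py and jq are available and
  # that the bpftrace output contains lines like "@path[10.0.0.2, 4420]: 21980").
  set -euo pipefail
  expected_state=$1                    # e.g. non_optimized or optimized
  expected_port=$2                     # e.g. 4420 or 4421
  nqn=nqn.2016-06.io.spdk:cnode1
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # Listener port whose first ANA state matches the expected one.
  active_port=$("$rpc" nvmf_subsystem_get_listeners "$nqn" \
    | jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
  # First port that the bpftrace histogram says carried I/O.
  io_port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
  [[ "$active_port" == "$expected_port" && "$io_port" == "$expected_port" ]]

In the run above the first check passes with non_optimized/4420; the harness then adds a second listener and promotes it (rpc.py nvmf_subsystem_add_listener ... -s 4421 followed by nvmf_subsystem_listener_set_ana_state ... -s 4421 -n optimized, as traced), and the same check passes again with optimized/4421 before the bdevperf process is torn down and try.txt is dumped.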
00:24:23.943 [2024-11-26 04:19:37.944804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.943 [2024-11-26 04:19:37.944862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.944922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.944949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.944972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.944986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.943 [2024-11-26 04:19:37.945505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.943 [2024-11-26 04:19:37.945568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.943 [2024-11-26 04:19:37.945585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.945598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.945616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.945628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.945645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.945658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.945675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.945689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.945706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.945718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.945754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.945781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.946660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.946692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.946721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.946787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.946835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.946869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.946903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.946937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.946971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.946990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:23.944 [2024-11-26 04:19:37.947038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.947239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.947289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.947350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.947383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.947917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.947959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.947979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.947995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.948030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.944 [2024-11-26 04:19:37.948340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.944 [2024-11-26 04:19:37.948372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.944 [2024-11-26 04:19:37.948390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:24:23.945 [2024-11-26 04:19:37.948610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.948874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.948978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.948998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.949012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.949049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:23.945 [2024-11-26 04:19:37.949672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.945 [2024-11-26 04:19:37.949704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.945 [2024-11-26 04:19:37.949738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.945 [2024-11-26 04:19:37.949752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.949787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.949805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.949825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.949840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.949861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.949875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.949895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.949910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.949931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.949945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.949965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:37.949987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.950020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.950037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:37.950057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:37.950072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:44.481389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:44.481475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:44.481574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:44.481605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:44.481838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.946 [2024-11-26 04:19:44.481875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.481965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.481980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 
m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.946 [2024-11-26 04:19:44.482577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.946 [2024-11-26 04:19:44.482606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.482625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.482638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.482656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.482669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.482705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.482735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.482763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.483139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.483217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 
04:19:44.483252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.483357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.483392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108928 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.483992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.484007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.484030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.484057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.484082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.484127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.484165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.484179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.484203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.947 [2024-11-26 04:19:44.484217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.947 [2024-11-26 04:19:44.484240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.947 [2024-11-26 04:19:44.484254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:24:23.948 [2024-11-26 04:19:44.484473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.484830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.484973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.484988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.485027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 
04:19:44.485283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.485574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.485695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.485838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109672 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.485881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.485923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.485965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.485992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.948 [2024-11-26 04:19:44.486043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.948 [2024-11-26 04:19:44.486079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.948 [2024-11-26 04:19:44.486096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.486622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.486808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b 
p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.486857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.486898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.486948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.486974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.486988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.487014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:44.487028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.487054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.487068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.487108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.487122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.487147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.487161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:44.487186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:44.487200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.949 [2024-11-26 04:19:51.404695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.404808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.404844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.404879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.404913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.404964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.404986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 
04:19:51.405187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.949 [2024-11-26 04:19:51.405262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.949 [2024-11-26 04:19:51.405274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105120 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.405822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.405884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.405899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 
p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.406621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.406965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.406989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.407003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.407025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.407039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.407061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.950 [2024-11-26 04:19:51.407090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.950 [2024-11-26 04:19:51.407141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.950 [2024-11-26 04:19:51.407154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 
04:19:51.407400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105384 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.407779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.407977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.407998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.408049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.408086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.408192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.408225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.408260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.951 [2024-11-26 04:19:51.408328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 
m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.951 [2024-11-26 04:19:51.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.951 [2024-11-26 04:19:51.408603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.408617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.408814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.408837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.408867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.408883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.408910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.408925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.408952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.408966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.408992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 
04:19:51.409519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.952 [2024-11-26 04:19:51.409593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.409955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105040 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.409969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:19:51.410005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:19:51.410022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:20:04.713153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:20:04.713195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:20:04.713227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:20:04.713242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:20:04.713257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:20:04.713271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:20:04.713285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:20:04.713298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.952 [2024-11-26 04:20:04.713312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.952 [2024-11-26 04:20:04.713325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 
[2024-11-26 04:20:04.713774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.713943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.953 [2024-11-26 04:20:04.713972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.713986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.953 [2024-11-26 04:20:04.714010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.953 [2024-11-26 04:20:04.714315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.953 [2024-11-26 04:20:04.714367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.953 [2024-11-26 04:20:04.714393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.953 [2024-11-26 04:20:04.714566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.953 [2024-11-26 04:20:04.714578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.714731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.714782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.714812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.714970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.714984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 
[2024-11-26 04:20:04.715034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:101 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.954 [2024-11-26 04:20:04.715768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.954 [2024-11-26 04:20:04.715792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.954 [2024-11-26 04:20:04.715808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.715837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.715866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.715895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.715924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.715952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.715981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.715995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:23.955 [2024-11-26 04:20:04.716287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716556] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.955 [2024-11-26 04:20:04.716878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.716971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.716986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.955 [2024-11-26 04:20:04.717000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.955 [2024-11-26 04:20:04.717014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.956 [2024-11-26 04:20:04.717027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.956 [2024-11-26 04:20:04.717042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.956 [2024-11-26 04:20:04.717061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.956 [2024-11-26 04:20:04.717075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.956 [2024-11-26 04:20:04.717103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.956 [2024-11-26 04:20:04.717136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fe060 is same with the state(5) to be set 00:24:23.956 [2024-11-26 04:20:04.717152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.956 [2024-11-26 04:20:04.717162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.956 [2024-11-26 04:20:04.717172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38360 len:8 PRP1 0x0 PRP2 0x0 00:24:23.956 [2024-11-26 04:20:04.717183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.956 [2024-11-26 04:20:04.717237] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13fe060 was disconnected and freed. reset controller. 
00:24:23.956 [2024-11-26 04:20:04.718471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.956 [2024-11-26 04:20:04.718550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140fa00 (9): Bad file descriptor 00:24:23.956 [2024-11-26 04:20:04.718657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.956 [2024-11-26 04:20:04.718708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.956 [2024-11-26 04:20:04.718760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140fa00 with addr=10.0.0.2, port=4421 00:24:23.956 [2024-11-26 04:20:04.718781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fa00 is same with the state(5) to be set 00:24:23.956 [2024-11-26 04:20:04.718824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140fa00 (9): Bad file descriptor 00:24:23.956 [2024-11-26 04:20:04.718846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.956 [2024-11-26 04:20:04.718859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.956 [2024-11-26 04:20:04.718874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.956 [2024-11-26 04:20:04.718908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.956 [2024-11-26 04:20:04.718923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.956 [2024-11-26 04:20:14.773281] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
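The reset above first hits connect() failures with errno 111 (ECONNREFUSED, nothing listening yet on 10.0.0.2:4421), so controller re-initialization fails, the controller is parked in a failed state, and bdev_nvme keeps retrying until the reconnect finally succeeds about ten seconds later. The retry cadence and the give-up point are attach-time options; the command below is the attach used later in this log for the timeout test (the multipath run may use different values) and is shown only to make the retry loop concrete:

# Hedged sketch: retry a failed reset every 2 s and declare the controller lost
# only after 5 s without a working path (flags copied from the bdevperf attach
# further down in this log, not from the multipath run above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2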
00:24:23.956 Received shutdown signal, test time was about 54.979909 seconds 00:24:23.956 00:24:23.956 Latency(us) 00:24:23.956 [2024-11-26T04:20:25.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.956 [2024-11-26T04:20:25.724Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.956 Verification LBA range: start 0x0 length 0x4000 00:24:23.956 Nvme0n1 : 54.98 12344.40 48.22 0.00 0.00 10353.39 467.32 7015926.69 00:24:23.956 [2024-11-26T04:20:25.724Z] =================================================================================================================== 00:24:23.956 [2024-11-26T04:20:25.724Z] Total : 12344.40 48.22 0.00 0.00 10353.39 467.32 7015926.69 00:24:23.956 04:20:25 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.956 04:20:25 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:23.956 04:20:25 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:23.956 04:20:25 -- host/multipath.sh@125 -- # nvmftestfini 00:24:23.956 04:20:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:23.956 04:20:25 -- nvmf/common.sh@116 -- # sync 00:24:23.956 04:20:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:23.956 04:20:25 -- nvmf/common.sh@119 -- # set +e 00:24:23.956 04:20:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:23.956 04:20:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:23.956 rmmod nvme_tcp 00:24:23.956 rmmod nvme_fabrics 00:24:23.956 rmmod nvme_keyring 00:24:23.956 04:20:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:23.956 04:20:25 -- nvmf/common.sh@123 -- # set -e 00:24:23.956 04:20:25 -- nvmf/common.sh@124 -- # return 0 00:24:23.956 04:20:25 -- nvmf/common.sh@477 -- # '[' -n 99049 ']' 00:24:23.956 04:20:25 -- nvmf/common.sh@478 -- # killprocess 99049 00:24:23.956 04:20:25 -- common/autotest_common.sh@936 -- # '[' -z 99049 ']' 00:24:23.956 04:20:25 -- common/autotest_common.sh@940 -- # kill -0 99049 00:24:23.956 04:20:25 -- common/autotest_common.sh@941 -- # uname 00:24:23.956 04:20:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.956 04:20:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99049 00:24:23.956 killing process with pid 99049 00:24:23.956 04:20:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:23.956 04:20:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:23.956 04:20:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99049' 00:24:23.956 04:20:25 -- common/autotest_common.sh@955 -- # kill 99049 00:24:23.956 04:20:25 -- common/autotest_common.sh@960 -- # wait 99049 00:24:24.216 04:20:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:24.216 04:20:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:24.216 04:20:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:24.216 04:20:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.216 04:20:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:24.216 04:20:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.216 04:20:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.216 04:20:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.216 04:20:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:24.216 00:24:24.216 real 1m1.058s 00:24:24.216 user 2m50.218s 00:24:24.216 
sys 0m14.741s 00:24:24.216 04:20:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:24.216 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.216 ************************************ 00:24:24.216 END TEST nvmf_multipath 00:24:24.216 ************************************ 00:24:24.216 04:20:25 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:24.216 04:20:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:24.216 04:20:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:24.216 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.216 ************************************ 00:24:24.216 START TEST nvmf_timeout 00:24:24.216 ************************************ 00:24:24.216 04:20:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:24.216 * Looking for test storage... 00:24:24.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:24.476 04:20:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:24.476 04:20:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:24.476 04:20:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:24.476 04:20:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:24.476 04:20:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:24.476 04:20:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:24.476 04:20:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:24.476 04:20:26 -- scripts/common.sh@335 -- # IFS=.-: 00:24:24.476 04:20:26 -- scripts/common.sh@335 -- # read -ra ver1 00:24:24.476 04:20:26 -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.476 04:20:26 -- scripts/common.sh@336 -- # read -ra ver2 00:24:24.476 04:20:26 -- scripts/common.sh@337 -- # local 'op=<' 00:24:24.476 04:20:26 -- scripts/common.sh@339 -- # ver1_l=2 00:24:24.476 04:20:26 -- scripts/common.sh@340 -- # ver2_l=1 00:24:24.476 04:20:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:24.476 04:20:26 -- scripts/common.sh@343 -- # case "$op" in 00:24:24.476 04:20:26 -- scripts/common.sh@344 -- # : 1 00:24:24.476 04:20:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:24.476 04:20:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.476 04:20:26 -- scripts/common.sh@364 -- # decimal 1 00:24:24.476 04:20:26 -- scripts/common.sh@352 -- # local d=1 00:24:24.476 04:20:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.476 04:20:26 -- scripts/common.sh@354 -- # echo 1 00:24:24.476 04:20:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:24.476 04:20:26 -- scripts/common.sh@365 -- # decimal 2 00:24:24.476 04:20:26 -- scripts/common.sh@352 -- # local d=2 00:24:24.476 04:20:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.476 04:20:26 -- scripts/common.sh@354 -- # echo 2 00:24:24.476 04:20:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:24.476 04:20:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:24.476 04:20:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:24.476 04:20:26 -- scripts/common.sh@367 -- # return 0 00:24:24.476 04:20:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.476 04:20:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.476 --rc genhtml_branch_coverage=1 00:24:24.476 --rc genhtml_function_coverage=1 00:24:24.476 --rc genhtml_legend=1 00:24:24.476 --rc geninfo_all_blocks=1 00:24:24.476 --rc geninfo_unexecuted_blocks=1 00:24:24.476 00:24:24.476 ' 00:24:24.476 04:20:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.476 --rc genhtml_branch_coverage=1 00:24:24.476 --rc genhtml_function_coverage=1 00:24:24.476 --rc genhtml_legend=1 00:24:24.476 --rc geninfo_all_blocks=1 00:24:24.476 --rc geninfo_unexecuted_blocks=1 00:24:24.476 00:24:24.476 ' 00:24:24.476 04:20:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.476 --rc genhtml_branch_coverage=1 00:24:24.476 --rc genhtml_function_coverage=1 00:24:24.476 --rc genhtml_legend=1 00:24:24.476 --rc geninfo_all_blocks=1 00:24:24.476 --rc geninfo_unexecuted_blocks=1 00:24:24.476 00:24:24.476 ' 00:24:24.476 04:20:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.476 --rc genhtml_branch_coverage=1 00:24:24.476 --rc genhtml_function_coverage=1 00:24:24.476 --rc genhtml_legend=1 00:24:24.476 --rc geninfo_all_blocks=1 00:24:24.476 --rc geninfo_unexecuted_blocks=1 00:24:24.476 00:24:24.476 ' 00:24:24.476 04:20:26 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:24.476 04:20:26 -- nvmf/common.sh@7 -- # uname -s 00:24:24.476 04:20:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.476 04:20:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.476 04:20:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.476 04:20:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.476 04:20:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.476 04:20:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.476 04:20:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.476 04:20:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.476 04:20:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.476 04:20:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.476 04:20:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:24:24.476 
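The scripts/common.sh xtrace a few lines up (cmp_versions and the lt 1.15 2 call) is how the run decides which lcov coverage flags to export into LCOV_OPTS: version strings are split on dots and compared component by component. A condensed sketch of that comparison, not the script itself:

# Condensed sketch of the per-component version compare traced above.
lt() { # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing part decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # succeeds, as in the trace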
04:20:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:24:24.476 04:20:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.476 04:20:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.476 04:20:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:24.476 04:20:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:24.476 04:20:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.476 04:20:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.476 04:20:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.476 04:20:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.476 04:20:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.476 04:20:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.476 04:20:26 -- paths/export.sh@5 -- # export PATH 00:24:24.476 04:20:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.476 04:20:26 -- nvmf/common.sh@46 -- # : 0 00:24:24.476 04:20:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:24.476 04:20:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:24.476 04:20:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:24.476 04:20:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.476 04:20:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.476 04:20:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
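NVME_HOST above just packages the generated host NQN and host ID into the argument form nvme-cli expects, and NVME_CONNECT is the command they get appended to when a test drives the kernel initiator. This particular test never runs it (bdevperf acts as the initiator below); a hypothetical expansion, only to show what the variables are for:

# Hypothetical expansion of "$NVME_CONNECT ... ${NVME_HOST[@]}" with the values
# generated above; the timeout test itself attaches via bdevperf instead.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 \
    --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157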
00:24:24.476 04:20:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:24.476 04:20:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:24.476 04:20:26 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:24.476 04:20:26 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:24.477 04:20:26 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:24.477 04:20:26 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:24.477 04:20:26 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:24.477 04:20:26 -- host/timeout.sh@19 -- # nvmftestinit 00:24:24.477 04:20:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:24.477 04:20:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.477 04:20:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:24.477 04:20:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:24.477 04:20:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:24.477 04:20:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.477 04:20:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.477 04:20:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.477 04:20:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:24.477 04:20:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:24.477 04:20:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:24.477 04:20:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:24.477 04:20:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:24.477 04:20:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:24.477 04:20:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.477 04:20:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.477 04:20:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:24.477 04:20:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:24.477 04:20:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:24.477 04:20:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:24.477 04:20:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:24.477 04:20:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.477 04:20:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:24.477 04:20:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:24.477 04:20:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:24.477 04:20:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:24.477 04:20:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:24.477 04:20:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:24.477 Cannot find device "nvmf_tgt_br" 00:24:24.477 04:20:26 -- nvmf/common.sh@154 -- # true 00:24:24.477 04:20:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:24.477 Cannot find device "nvmf_tgt_br2" 00:24:24.477 04:20:26 -- nvmf/common.sh@155 -- # true 00:24:24.477 04:20:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:24.477 04:20:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:24.477 Cannot find device "nvmf_tgt_br" 00:24:24.477 04:20:26 -- nvmf/common.sh@157 -- # true 00:24:24.477 04:20:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:24.477 Cannot find device "nvmf_tgt_br2" 00:24:24.477 04:20:26 -- nvmf/common.sh@158 -- # true 00:24:24.477 04:20:26 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:24.477 04:20:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:24.736 04:20:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:24.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:24.736 04:20:26 -- nvmf/common.sh@161 -- # true 00:24:24.736 04:20:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:24.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:24.736 04:20:26 -- nvmf/common.sh@162 -- # true 00:24:24.736 04:20:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:24.736 04:20:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:24.736 04:20:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:24.736 04:20:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:24.736 04:20:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:24.736 04:20:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:24.736 04:20:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:24.736 04:20:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:24.736 04:20:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:24.736 04:20:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:24.736 04:20:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:24.736 04:20:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:24.736 04:20:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:24.736 04:20:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:24.736 04:20:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:24.736 04:20:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:24.736 04:20:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:24.736 04:20:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:24.736 04:20:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:24.736 04:20:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:24.736 04:20:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:24.736 04:20:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:24.736 04:20:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:24.736 04:20:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:24.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:24:24.736 00:24:24.736 --- 10.0.0.2 ping statistics --- 00:24:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.736 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:24.736 04:20:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:24.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:24.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:24.736 00:24:24.736 --- 10.0.0.3 ping statistics --- 00:24:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.736 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:24.736 04:20:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:24.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:24.736 00:24:24.736 --- 10.0.0.1 ping statistics --- 00:24:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.736 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:24.736 04:20:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.736 04:20:26 -- nvmf/common.sh@421 -- # return 0 00:24:24.736 04:20:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:24.736 04:20:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.736 04:20:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:24.736 04:20:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:24.736 04:20:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.736 04:20:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:24.736 04:20:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:24.736 04:20:26 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:24.736 04:20:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:24.736 04:20:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.736 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:24.736 04:20:26 -- nvmf/common.sh@469 -- # nvmfpid=100427 00:24:24.736 04:20:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:24.736 04:20:26 -- nvmf/common.sh@470 -- # waitforlisten 100427 00:24:24.736 04:20:26 -- common/autotest_common.sh@829 -- # '[' -z 100427 ']' 00:24:24.736 04:20:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.736 04:20:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.736 04:20:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.736 04:20:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.736 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:24.996 [2024-11-26 04:20:26.500059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:24.996 [2024-11-26 04:20:26.500117] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.996 [2024-11-26 04:20:26.632424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:24.996 [2024-11-26 04:20:26.704224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:24.996 [2024-11-26 04:20:26.704364] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.996 [2024-11-26 04:20:26.704376] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
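The nvmf_veth_init trace above builds the test network from nothing: a network namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, then one ping per address to prove connectivity before the target is started inside that namespace (the ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt invocation just above). A condensed replay of the same commands, with the second target interface and the error/cleanup paths left out:

# Condensed sketch of nvmf_veth_init as traced above (same names; the
# nvmf_tgt_if2 / 10.0.0.3 leg is omitted for brevity).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target, the first of the three pings above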
00:24:24.996 [2024-11-26 04:20:26.704384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.996 [2024-11-26 04:20:26.704555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.996 [2024-11-26 04:20:26.704572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.932 04:20:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.932 04:20:27 -- common/autotest_common.sh@862 -- # return 0 00:24:25.932 04:20:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:25.932 04:20:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.932 04:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:25.932 04:20:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.932 04:20:27 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.932 04:20:27 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:26.190 [2024-11-26 04:20:27.812252] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.190 04:20:27 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:26.449 Malloc0 00:24:26.449 04:20:28 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.708 04:20:28 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.967 04:20:28 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.226 [2024-11-26 04:20:28.770444] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.226 04:20:28 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:27.226 04:20:28 -- host/timeout.sh@32 -- # bdevperf_pid=100518 00:24:27.226 04:20:28 -- host/timeout.sh@34 -- # waitforlisten 100518 /var/tmp/bdevperf.sock 00:24:27.226 04:20:28 -- common/autotest_common.sh@829 -- # '[' -z 100518 ']' 00:24:27.226 04:20:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.226 04:20:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.226 04:20:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.226 04:20:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.226 04:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.226 [2024-11-26 04:20:28.826369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
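With the target process up inside the namespace, timeout.sh provisions it over rpc.py exactly as traced above: a TCP transport, a 64 MB malloc bdev, subsystem cnode1, the namespace, and a listener on 10.0.0.2:4420; bdevperf is then launched as the initiator-side workload and driven over its own RPC socket (the perform_tests call just below). Pulled out of the xtrace for readability, with the option strings copied verbatim:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                 # transport options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf pinned to core 2 (-m 0x4), queue depth 128, 4 KiB
# verify I/O for 10 seconds, waiting on its RPC socket (-z) to be told to start.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f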
00:24:27.226 [2024-11-26 04:20:28.826484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100518 ] 00:24:27.226 [2024-11-26 04:20:28.957347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.485 [2024-11-26 04:20:29.029414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.052 04:20:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.052 04:20:29 -- common/autotest_common.sh@862 -- # return 0 00:24:28.052 04:20:29 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:28.310 04:20:29 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:28.568 NVMe0n1 00:24:28.568 04:20:30 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.568 04:20:30 -- host/timeout.sh@51 -- # rpc_pid=100570 00:24:28.568 04:20:30 -- host/timeout.sh@53 -- # sleep 1 00:24:28.827 Running I/O for 10 seconds... 00:24:29.767 04:20:31 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.767 [2024-11-26 04:20:31.494860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.494924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.494946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.494957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.494968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.494979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.494990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 
[2024-11-26 04:20:31.495053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the 
state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.767 [2024-11-26 04:20:31.495397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495478] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9490 is same with the state(5) to be set 00:24:29.768 [2024-11-26 04:20:31.495972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496355] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.768 [2024-11-26 04:20:31.496601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.768 [2024-11-26 04:20:31.496610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.496919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.496979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.496990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:29.769 [2024-11-26 04:20:31.496998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 
04:20:31.497226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.769 [2024-11-26 04:20:31.497338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.769 [2024-11-26 04:20:31.497459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.769 [2024-11-26 04:20:31.497470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497628] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.497940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.497979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.497988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.498024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.498093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 
[2024-11-26 04:20:31.498332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.770 [2024-11-26 04:20:31.498341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.770 [2024-11-26 04:20:31.498359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.770 [2024-11-26 04:20:31.498370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.771 [2024-11-26 04:20:31.498564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-11-26 04:20:31.498729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1316780 is same with the state(5) to be set 00:24:29.771 [2024-11-26 04:20:31.498781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.771 [2024-11-26 04:20:31.498796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.771 [2024-11-26 04:20:31.498806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122936 len:8 PRP1 0x0 PRP2 0x0 00:24:29.771 [2024-11-26 04:20:31.498815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498868] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1316780 was disconnected and freed. reset controller. 00:24:29.771 [2024-11-26 04:20:31.498961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.771 [2024-11-26 04:20:31.498978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.498989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.771 [2024-11-26 04:20:31.498998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.499008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.771 [2024-11-26 04:20:31.499017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.499027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.771 [2024-11-26 04:20:31.499036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.771 [2024-11-26 04:20:31.499044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12918c0 is same with the state(5) to be set 00:24:29.771 [2024-11-26 04:20:31.499298] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.771 [2024-11-26 04:20:31.499330] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12918c0 (9): Bad file descriptor 00:24:29.771 [2024-11-26 04:20:31.499440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.771 [2024-11-26 04:20:31.499487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.771 [2024-11-26 04:20:31.499503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12918c0 with addr=10.0.0.2, port=4420 00:24:29.771 [2024-11-26 04:20:31.499513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12918c0 is same with the state(5) to be set 00:24:29.771 [2024-11-26 04:20:31.499530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12918c0 (9): Bad file descriptor 00:24:29.771 [2024-11-26 04:20:31.499546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error 
state 00:24:29.771 [2024-11-26 04:20:31.499555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.771 [2024-11-26 04:20:31.499565] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.771 [2024-11-26 04:20:31.512269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.771 [2024-11-26 04:20:31.512318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.771 04:20:31 -- host/timeout.sh@56 -- # sleep 2 00:24:32.313 [2024-11-26 04:20:33.512401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.313 [2024-11-26 04:20:33.512500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.313 [2024-11-26 04:20:33.512516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12918c0 with addr=10.0.0.2, port=4420 00:24:32.313 [2024-11-26 04:20:33.512527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12918c0 is same with the state(5) to be set 00:24:32.313 [2024-11-26 04:20:33.512545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12918c0 (9): Bad file descriptor 00:24:32.313 [2024-11-26 04:20:33.512561] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.313 [2024-11-26 04:20:33.512569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.313 [2024-11-26 04:20:33.512577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.314 [2024-11-26 04:20:33.512595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.314 [2024-11-26 04:20:33.512604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.314 04:20:33 -- host/timeout.sh@57 -- # get_controller 00:24:32.314 04:20:33 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.314 04:20:33 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:32.314 04:20:33 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:32.314 04:20:33 -- host/timeout.sh@58 -- # get_bdev 00:24:32.314 04:20:33 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:32.314 04:20:33 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:32.604 04:20:34 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:32.604 04:20:34 -- host/timeout.sh@61 -- # sleep 5 00:24:34.009 [2024-11-26 04:20:35.512720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.009 [2024-11-26 04:20:35.512808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.009 [2024-11-26 04:20:35.512824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12918c0 with addr=10.0.0.2, port=4420 00:24:34.009 [2024-11-26 04:20:35.512837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12918c0 is same with the state(5) to be set 00:24:34.009 [2024-11-26 04:20:35.512859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12918c0 (9): Bad file descriptor 00:24:34.009 [2024-11-26 04:20:35.512876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.009 [2024-11-26 04:20:35.512885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.009 [2024-11-26 04:20:35.512895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.009 [2024-11-26 04:20:35.512919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.009 [2024-11-26 04:20:35.512929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.911 [2024-11-26 04:20:37.512951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.911 [2024-11-26 04:20:37.513001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.911 [2024-11-26 04:20:37.513027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.911 [2024-11-26 04:20:37.513035] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:35.911 [2024-11-26 04:20:37.513054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:36.848 00:24:36.848 Latency(us) 00:24:36.848 [2024-11-26T04:20:38.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.848 [2024-11-26T04:20:38.616Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:36.848 Verification LBA range: start 0x0 length 0x4000 00:24:36.848 NVMe0n1 : 8.13 1882.11 7.35 15.74 0.00 67345.43 2234.18 7015926.69 00:24:36.848 [2024-11-26T04:20:38.616Z] =================================================================================================================== 00:24:36.848 [2024-11-26T04:20:38.616Z] Total : 1882.11 7.35 15.74 0.00 67345.43 2234.18 7015926.69 00:24:36.848 0 00:24:37.415 04:20:39 -- host/timeout.sh@62 -- # get_controller 00:24:37.415 04:20:39 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.415 04:20:39 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:37.674 04:20:39 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:37.674 04:20:39 -- host/timeout.sh@63 -- # get_bdev 00:24:37.674 04:20:39 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:37.674 04:20:39 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:37.940 04:20:39 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:37.940 04:20:39 -- host/timeout.sh@65 -- # wait 100570 00:24:37.940 04:20:39 -- host/timeout.sh@67 -- # killprocess 100518 00:24:37.940 04:20:39 -- common/autotest_common.sh@936 -- # '[' -z 100518 ']' 00:24:37.940 04:20:39 -- common/autotest_common.sh@940 -- # kill -0 100518 00:24:37.940 04:20:39 -- common/autotest_common.sh@941 -- # uname 00:24:37.940 04:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:37.940 04:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100518 00:24:37.940 killing process with pid 100518 00:24:37.940 Received shutdown signal, test time was about 9.226775 seconds 00:24:37.940 00:24:37.940 Latency(us) 00:24:37.940 [2024-11-26T04:20:39.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.940 [2024-11-26T04:20:39.708Z] =================================================================================================================== 00:24:37.940 [2024-11-26T04:20:39.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.940 04:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:37.940 04:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:37.940 04:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100518' 00:24:37.940 04:20:39 -- common/autotest_common.sh@955 -- # kill 100518 00:24:37.940 04:20:39 -- common/autotest_common.sh@960 -- # wait 100518 00:24:38.207 04:20:39 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.207 [2024-11-26 04:20:39.956451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:38.466 04:20:39 -- host/timeout.sh@74 -- # bdevperf_pid=100723 00:24:38.466 04:20:39 -- host/timeout.sh@76 -- # waitforlisten 100723 /var/tmp/bdevperf.sock 00:24:38.466 04:20:39 -- common/autotest_common.sh@829 -- # '[' -z 100723 ']' 00:24:38.466 04:20:39 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:38.466 04:20:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.466 04:20:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.466 04:20:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.466 04:20:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.466 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.466 [2024-11-26 04:20:40.020236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:38.466 [2024-11-26 04:20:40.020326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100723 ] 00:24:38.466 [2024-11-26 04:20:40.153919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.466 [2024-11-26 04:20:40.214484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.403 04:20:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.403 04:20:40 -- common/autotest_common.sh@862 -- # return 0 00:24:39.403 04:20:40 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:39.662 04:20:41 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:39.921 NVMe0n1 00:24:39.921 04:20:41 -- host/timeout.sh@84 -- # rpc_pid=100772 00:24:39.921 04:20:41 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.921 04:20:41 -- host/timeout.sh@86 -- # sleep 1 00:24:39.921 Running I/O for 10 seconds... 
00:24:40.857 04:20:42 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.118 [2024-11-26 04:20:42.742136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742425] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742596] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.742668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114eca0 is same with the state(5) to be set 00:24:41.118 [2024-11-26 04:20:42.743216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 
04:20:42.743387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-11-26 04:20:42.743449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-11-26 04:20:42.743458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.743985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.743994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-11-26 04:20:42.744264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-11-26 04:20:42.744282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-11-26 04:20:42.744305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.119 [2024-11-26 04:20:42.744315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 
04:20:42.744431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.744898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.744986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.744994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.745004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.745012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.745023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.745031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.745041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-11-26 04:20:42.745050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.120 [2024-11-26 04:20:42.745060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-11-26 04:20:42.745068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 
[2024-11-26 04:20:42.745218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-11-26 04:20:42.745285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-11-26 04:20:42.745322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-11-26 04:20:42.745376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-11-26 04:20:42.745429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-11-26 04:20:42.745464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-11-26 04:20:42.745482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:66 nsid:1 lba:16448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-11-26 04:20:42.745801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-11-26 04:20:42.745811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16608 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.121 [2024-11-26 04:20:42.745820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.121 [2024-11-26 04:20:42.745829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2313660 is same with the state(5) to be set
00:24:41.121 [2024-11-26 04:20:42.745841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:41.121 [2024-11-26 04:20:42.745848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:41.121 [2024-11-26 04:20:42.745863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16616 len:8 PRP1 0x0 PRP2 0x0
00:24:41.122 [2024-11-26 04:20:42.745874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.122 [2024-11-26 04:20:42.745925] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2313660 was disconnected and freed. reset controller.
00:24:41.122 [2024-11-26 04:20:42.746052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.122 [2024-11-26 04:20:42.746084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.122 [2024-11-26 04:20:42.746094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.122 [2024-11-26 04:20:42.746102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.122 [2024-11-26 04:20:42.746111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.122 [2024-11-26 04:20:42.746120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.122 [2024-11-26 04:20:42.746129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.122 [2024-11-26 04:20:42.746137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.122 [2024-11-26 04:20:42.746145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:41.122 [2024-11-26 04:20:42.746364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.122 [2024-11-26 04:20:42.746392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:41.122 [2024-11-26 04:20:42.746493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.122 [2024-11-26 04:20:42.746537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.122 [2024-11-26 04:20:42.746558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e8c0 with addr=10.0.0.2, port=4420
00:24:41.122 [2024-11-26 04:20:42.746568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:41.122 [2024-11-26 04:20:42.746584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:41.122 [2024-11-26 04:20:42.746600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.122 [2024-11-26 04:20:42.746610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.122 [2024-11-26 04:20:42.746620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.122 [2024-11-26 04:20:42.758112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.122 [2024-11-26 04:20:42.758164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.122 04:20:42 -- host/timeout.sh@90 -- # sleep 1
00:24:42.058 [2024-11-26 04:20:43.758245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.058 [2024-11-26 04:20:43.758324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.058 [2024-11-26 04:20:43.758341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e8c0 with addr=10.0.0.2, port=4420
00:24:42.058 [2024-11-26 04:20:43.758351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:42.058 [2024-11-26 04:20:43.758369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:42.058 [2024-11-26 04:20:43.758384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.058 [2024-11-26 04:20:43.758393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.058 [2024-11-26 04:20:43.758402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.058 [2024-11-26 04:20:43.758429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.058 [2024-11-26 04:20:43.758440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.058 04:20:43 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:42.316 [2024-11-26 04:20:44.004451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:42.316 04:20:44 -- host/timeout.sh@92 -- # wait 100772
00:24:43.252 [2024-11-26 04:20:44.775667] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
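The host/timeout.sh trace above toggles the target's TCP listener around a background bdevperf run; a minimal shell sketch of that sequence (not part of the captured log; paths, NQN and address are taken from the trace lines) looks like:
  # drop the listener so host I/O starts timing out and the initiator resets the controller
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # restore the listener; the host-side reset can then reconnect and queued I/O completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drive the verification workload through bdevperf's RPC socket
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests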
00:24:51.373
00:24:51.373 Latency(us)
00:24:51.373 [2024-11-26T04:20:53.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.373 [2024-11-26T04:20:53.141Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:51.373 Verification LBA range: start 0x0 length 0x4000
00:24:51.373 NVMe0n1 : 10.01 10918.60 42.65 0.00 0.00 11706.08 1161.77 3019898.88
00:24:51.373 [2024-11-26T04:20:53.141Z] ===================================================================================================================
00:24:51.373 [2024-11-26T04:20:53.141Z] Total : 10918.60 42.65 0.00 0.00 11706.08 1161.77 3019898.88
00:24:51.373 0
00:24:51.373 04:20:51 -- host/timeout.sh@97 -- # rpc_pid=100889
00:24:51.373 04:20:51 -- host/timeout.sh@98 -- # sleep 1
00:24:51.373 04:20:51 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:51.373 Running I/O for 10 seconds...
00:24:51.373 04:20:52 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:51.373 [2024-11-26 04:20:52.826365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set
00:24:51.373 [2024-11-26 04:20:52.826530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826684] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.826714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaa110 is same with the state(5) to be set 00:24:51.373 [2024-11-26 04:20:52.827002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.373 [2024-11-26 04:20:52.827235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.373 [2024-11-26 04:20:52.827246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:51.374 [2024-11-26 04:20:52.827602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 
04:20:52.827825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.374 [2024-11-26 04:20:52.827854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.374 [2024-11-26 04:20:52.827894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.827974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.827985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.374 [2024-11-26 04:20:52.827994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.828005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.374 [2024-11-26 04:20:52.828015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.374 [2024-11-26 04:20:52.828026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.374 [2024-11-26 04:20:52.828035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828247] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129160 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.375 [2024-11-26 04:20:52.828704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 [2024-11-26 04:20:52.828897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.375 [2024-11-26 04:20:52.828908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.375 
[2024-11-26 04:20:52.828917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.828929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.828937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.828949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.828958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.828968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.828977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.828988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.828997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.376 [2024-11-26 04:20:52.829341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.376 [2024-11-26 04:20:52.829692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22df1d0 is same with the state(5) to be set 00:24:51.376 [2024-11-26 04:20:52.829713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:51.376 [2024-11-26 04:20:52.829736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:51.376 [2024-11-26 04:20:52.829751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128904 len:8 PRP1 0x0 PRP2 0x0 00:24:51.376 [2024-11-26 04:20:52.829761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.376 [2024-11-26 04:20:52.829825] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22df1d0 was disconnected and freed. reset controller. 
00:24:51.376 [2024-11-26 04:20:52.830062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.377 [2024-11-26 04:20:52.830152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:51.377 [2024-11-26 04:20:52.830271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.377 [2024-11-26 04:20:52.830333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.377 [2024-11-26 04:20:52.830350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e8c0 with addr=10.0.0.2, port=4420
00:24:51.377 [2024-11-26 04:20:52.830360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:51.377 [2024-11-26 04:20:52.830377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:51.377 [2024-11-26 04:20:52.830405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.377 [2024-11-26 04:20:52.830414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.377 [2024-11-26 04:20:52.830424] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.377 [2024-11-26 04:20:52.830452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.377 [2024-11-26 04:20:52.830471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.377 04:20:52 -- host/timeout.sh@101 -- # sleep 3
00:24:52.313 [2024-11-26 04:20:53.830545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.313 [2024-11-26 04:20:53.830630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.313 [2024-11-26 04:20:53.830646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e8c0 with addr=10.0.0.2, port=4420
00:24:52.313 [2024-11-26 04:20:53.830656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:52.313 [2024-11-26 04:20:53.830676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:52.313 [2024-11-26 04:20:53.830691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:52.313 [2024-11-26 04:20:53.830700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:52.313 [2024-11-26 04:20:53.830708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:52.313 [2024-11-26 04:20:53.830741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:52.313 [2024-11-26 04:20:53.830752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:53.249 [2024-11-26 04:20:54.830813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.249 [2024-11-26 04:20:54.830893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.249 [2024-11-26 04:20:54.830909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e8c0 with addr=10.0.0.2, port=4420
00:24:53.249 [2024-11-26 04:20:54.830919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:53.249 [2024-11-26 04:20:54.830935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:53.249 [2024-11-26 04:20:54.830949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:53.249 [2024-11-26 04:20:54.830958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:53.249 [2024-11-26 04:20:54.830966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:53.249 [2024-11-26 04:20:54.830983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:53.249 [2024-11-26 04:20:54.830993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.185 [2024-11-26 04:20:55.832525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.185 [2024-11-26 04:20:55.832605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.185 [2024-11-26 04:20:55.832621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e8c0 with addr=10.0.0.2, port=4420
00:24:54.185 [2024-11-26 04:20:55.832631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e8c0 is same with the state(5) to be set
00:24:54.185 [2024-11-26 04:20:55.832782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e8c0 (9): Bad file descriptor
00:24:54.185 [2024-11-26 04:20:55.832973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:54.185 [2024-11-26 04:20:55.832987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:54.185 [2024-11-26 04:20:55.832996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.185 [2024-11-26 04:20:55.835070] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:54.185 [2024-11-26 04:20:55.835115] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.185 04:20:55 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:54.444 [2024-11-26 04:20:56.120925] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:54.444 04:20:56 -- host/timeout.sh@103 -- # wait 100889
00:24:55.381 [2024-11-26 04:20:56.856119] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:00.650
00:25:00.650 Latency(us)
00:25:00.650 [2024-11-26T04:21:02.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.651 [2024-11-26T04:21:02.419Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:00.651 Verification LBA range: start 0x0 length 0x4000
00:25:00.651 NVMe0n1 : 10.01 8846.10 34.56 7022.72 0.00 8053.12 811.75 3019898.88
00:25:00.651 [2024-11-26T04:21:02.419Z] ===================================================================================================================
00:25:00.651 [2024-11-26T04:21:02.419Z] Total : 8846.10 34.56 7022.72 0.00 8053.12 0.00 3019898.88
00:25:00.651 0
00:25:00.651 04:21:01 -- host/timeout.sh@105 -- # killprocess 100723
00:25:00.651 04:21:01 -- common/autotest_common.sh@936 -- # '[' -z 100723 ']'
00:25:00.651 04:21:01 -- common/autotest_common.sh@940 -- # kill -0 100723
00:25:00.651 04:21:01 -- common/autotest_common.sh@941 -- # uname
00:25:00.651 04:21:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:00.651 04:21:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100723
00:25:00.651 killing process with pid 100723 Received shutdown signal, test time was about 10.000000 seconds
00:25:00.651
00:25:00.651 Latency(us)
00:25:00.651 [2024-11-26T04:21:02.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.651 [2024-11-26T04:21:02.419Z] ===================================================================================================================
00:25:00.651 [2024-11-26T04:21:02.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:00.651 04:21:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:00.651 04:21:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:00.651 04:21:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100723'
00:25:00.651 04:21:01 -- common/autotest_common.sh@955 -- # kill 100723
00:25:00.651 04:21:01 -- common/autotest_common.sh@960 -- # wait 100723
00:25:00.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:00.651 04:21:01 -- host/timeout.sh@110 -- # bdevperf_pid=101015
00:25:00.651 04:21:01 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:00.651 04:21:01 -- host/timeout.sh@112 -- # waitforlisten 101015 /var/tmp/bdevperf.sock
00:25:00.651 04:21:01 -- common/autotest_common.sh@829 -- # '[' -z 101015 ']'
00:25:00.651 04:21:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:00.651 04:21:01 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:00.651 04:21:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:00.651 04:21:01 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:00.651 04:21:01 -- common/autotest_common.sh@10 -- # set +x
00:25:00.651 [2024-11-26 04:21:02.008276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:00.651 [2024-11-26 04:21:02.008586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101015 ]
00:25:00.651 [2024-11-26 04:21:02.143807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:00.651 [2024-11-26 04:21:02.219718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:01.219 04:21:02 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:01.219 04:21:02 -- common/autotest_common.sh@862 -- # return 0
00:25:01.219 04:21:02 -- host/timeout.sh@116 -- # dtrace_pid=101043
00:25:01.219 04:21:02 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 101015 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:25:01.219 04:21:02 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:25:01.478 04:21:03 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:25:01.737 NVMe0n1
00:25:01.995 04:21:03 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:01.995 04:21:03 -- host/timeout.sh@124 -- # rpc_pid=101091
00:25:01.995 04:21:03 -- host/timeout.sh@125 -- # sleep 1
00:25:01.995 Running I/O for 10 seconds...
00:25:02.931 04:21:04 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:03.192 [2024-11-26 04:21:04.697595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.192 [2024-11-26 04:21:04.697652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.192 [2024-11-26 04:21:04.697673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.192 [2024-11-26 04:21:04.697680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.192 [2024-11-26 04:21:04.697687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.192 [2024-11-26 04:21:04.697694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.192 [2024-11-26 04:21:04.697701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.193 [2024-11-26 04:21:04.697717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.193 [2024-11-26 04:21:04.697734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.193 [2024-11-26 04:21:04.697740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set
00:25:03.193 [2024-11-26 04:21:04.697748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697898] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.697988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 
00:25:03.193 [2024-11-26 04:21:04.698157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadba0 is same with the state(5) to be set 00:25:03.193 [2024-11-26 04:21:04.698585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127760 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.193 [2024-11-26 04:21:04.698953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.193 [2024-11-26 04:21:04.698962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.698973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.698982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.698994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.194 [2024-11-26 04:21:04.699027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.194 [2024-11-26 04:21:04.699779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.194 [2024-11-26 04:21:04.699790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.699981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.699991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:03.195 [2024-11-26 04:21:04.700097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.195 [2024-11-26 04:21:04.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.195 [2024-11-26 04:21:04.700585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.700984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.700995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.196 [2024-11-26 04:21:04.701107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.196 [2024-11-26 04:21:04.701278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97b780 is same with the state(5) to be set 00:25:03.196 [2024-11-26 04:21:04.701304] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.196 [2024-11-26 04:21:04.701312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.196 [2024-11-26 04:21:04.701324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17232 len:8 PRP1 0x0 PRP2 0x0 00:25:03.196 [2024-11-26 04:21:04.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.196 [2024-11-26 04:21:04.701382] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x97b780 was disconnected and freed. reset controller. 00:25:03.196 [2024-11-26 04:21:04.701637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.196 [2024-11-26 04:21:04.701753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f68c0 (9): Bad file descriptor 00:25:03.196 [2024-11-26 04:21:04.701871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.196 [2024-11-26 04:21:04.701920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.196 [2024-11-26 04:21:04.701936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f68c0 with addr=10.0.0.2, port=4420 00:25:03.197 [2024-11-26 04:21:04.701946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f68c0 is same with the state(5) to be set 00:25:03.197 [2024-11-26 04:21:04.701965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f68c0 (9): Bad file descriptor 00:25:03.197 [2024-11-26 04:21:04.701980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.197 [2024-11-26 04:21:04.701989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.197 [2024-11-26 04:21:04.702019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.197 [2024-11-26 04:21:04.702043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
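Everything from the first READ/ABORTED pair above down to the "qpair 0x97b780 was disconnected and freed" message is one event repeated per outstanding command: when the submission queue is deleted, each queued read is completed manually with ABORTED - SQ DELETION before the controller reset begins. A quick, purely illustrative way to summarize such a burst from a saved copy of this log (the file name below is hypothetical) is:

  # count the aborted completions and the distinct CIDs they covered
  grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log
  grep -o 'READ sqid:1 cid:[0-9]*' nvmf_timeout.log | sort -u | wc -l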
00:25:03.197 [2024-11-26 04:21:04.702054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.197 04:21:04 -- host/timeout.sh@128 -- # wait 101091 00:25:05.104 [2024-11-26 04:21:06.702142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.104 [2024-11-26 04:21:06.702246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.105 [2024-11-26 04:21:06.702264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f68c0 with addr=10.0.0.2, port=4420 00:25:05.105 [2024-11-26 04:21:06.702274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f68c0 is same with the state(5) to be set 00:25:05.105 [2024-11-26 04:21:06.702302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f68c0 (9): Bad file descriptor 00:25:05.105 [2024-11-26 04:21:06.702319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.105 [2024-11-26 04:21:06.702328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.105 [2024-11-26 04:21:06.702336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.105 [2024-11-26 04:21:06.702354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.105 [2024-11-26 04:21:06.702364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.008 [2024-11-26 04:21:08.702457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.008 [2024-11-26 04:21:08.702564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.008 [2024-11-26 04:21:08.702583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f68c0 with addr=10.0.0.2, port=4420 00:25:07.008 [2024-11-26 04:21:08.702593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f68c0 is same with the state(5) to be set 00:25:07.008 [2024-11-26 04:21:08.702612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f68c0 (9): Bad file descriptor 00:25:07.008 [2024-11-26 04:21:08.702628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.008 [2024-11-26 04:21:08.702637] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.008 [2024-11-26 04:21:08.702646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.008 [2024-11-26 04:21:08.702665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.008 [2024-11-26 04:21:08.702676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.541 [2024-11-26 04:21:10.702746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
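The reconnect attempts above land roughly two seconds apart (04:21:04, 04:21:06, 04:21:08, 04:21:10) and each one fails with errno 111 (ECONNREFUSED) because nothing is listening at 10.0.0.2:4420 any more. A minimal stand-alone probe loop in the same spirit — an illustration only, not part of host/timeout.sh — could look like:

  for attempt in 1 2 3; do
      # bash's /dev/tcp redirection issues the same kind of connect() that
      # nvme_tcp_qpair_connect_sock() keeps failing on while the target is down
      if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
          echo "attempt ${attempt}: listener reachable"
      else
          echo "attempt ${attempt}: connect refused, retrying in 2s"
      fi
      sleep 2
  done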
00:25:09.541 [2024-11-26 04:21:10.702823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.541 [2024-11-26 04:21:10.702850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.541 [2024-11-26 04:21:10.702860] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:09.541 [2024-11-26 04:21:10.702885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.110 00:25:10.110 Latency(us) 00:25:10.110 [2024-11-26T04:21:11.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.110 [2024-11-26T04:21:11.878Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:10.110 NVMe0n1 : 8.12 3074.44 12.01 15.76 0.00 41363.81 2576.76 7015926.69 00:25:10.110 [2024-11-26T04:21:11.878Z] =================================================================================================================== 00:25:10.110 [2024-11-26T04:21:11.878Z] Total : 3074.44 12.01 15.76 0.00 41363.81 2576.76 7015926.69 00:25:10.110 0 00:25:10.110 04:21:11 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:10.110 Attaching 5 probes... 00:25:10.110 1216.701496: reset bdev controller NVMe0 00:25:10.110 1216.883037: reconnect bdev controller NVMe0 00:25:10.110 3217.149049: reconnect delay bdev controller NVMe0 00:25:10.110 3217.163965: reconnect bdev controller NVMe0 00:25:10.110 5217.454344: reconnect delay bdev controller NVMe0 00:25:10.110 5217.468116: reconnect bdev controller NVMe0 00:25:10.110 7217.773062: reconnect delay bdev controller NVMe0 00:25:10.110 7217.815032: reconnect bdev controller NVMe0 00:25:10.110 04:21:11 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:10.110 04:21:11 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:10.110 04:21:11 -- host/timeout.sh@136 -- # kill 101043 00:25:10.110 04:21:11 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:10.110 04:21:11 -- host/timeout.sh@139 -- # killprocess 101015 00:25:10.110 04:21:11 -- common/autotest_common.sh@936 -- # '[' -z 101015 ']' 00:25:10.110 04:21:11 -- common/autotest_common.sh@940 -- # kill -0 101015 00:25:10.110 04:21:11 -- common/autotest_common.sh@941 -- # uname 00:25:10.110 04:21:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:10.110 04:21:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101015 00:25:10.110 killing process with pid 101015 00:25:10.110 Received shutdown signal, test time was about 8.193899 seconds 00:25:10.110 00:25:10.110 Latency(us) 00:25:10.110 [2024-11-26T04:21:11.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.110 [2024-11-26T04:21:11.878Z] =================================================================================================================== 00:25:10.110 [2024-11-26T04:21:11.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.110 04:21:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:10.110 04:21:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:10.110 04:21:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101015' 00:25:10.110 04:21:11 -- common/autotest_common.sh@955 -- # kill 101015 00:25:10.110 04:21:11 -- common/autotest_common.sh@960 -- # wait 101015 00:25:10.370 
04:21:11 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.629 04:21:12 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:10.629 04:21:12 -- host/timeout.sh@145 -- # nvmftestfini 00:25:10.629 04:21:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:10.629 04:21:12 -- nvmf/common.sh@116 -- # sync 00:25:10.629 04:21:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:10.629 04:21:12 -- nvmf/common.sh@119 -- # set +e 00:25:10.629 04:21:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:10.629 04:21:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:10.629 rmmod nvme_tcp 00:25:10.629 rmmod nvme_fabrics 00:25:10.629 rmmod nvme_keyring 00:25:10.629 04:21:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:10.629 04:21:12 -- nvmf/common.sh@123 -- # set -e 00:25:10.629 04:21:12 -- nvmf/common.sh@124 -- # return 0 00:25:10.629 04:21:12 -- nvmf/common.sh@477 -- # '[' -n 100427 ']' 00:25:10.629 04:21:12 -- nvmf/common.sh@478 -- # killprocess 100427 00:25:10.629 04:21:12 -- common/autotest_common.sh@936 -- # '[' -z 100427 ']' 00:25:10.629 04:21:12 -- common/autotest_common.sh@940 -- # kill -0 100427 00:25:10.629 04:21:12 -- common/autotest_common.sh@941 -- # uname 00:25:10.629 04:21:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:10.629 04:21:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100427 00:25:10.629 killing process with pid 100427 00:25:10.629 04:21:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:10.629 04:21:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:10.629 04:21:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100427' 00:25:10.629 04:21:12 -- common/autotest_common.sh@955 -- # kill 100427 00:25:10.629 04:21:12 -- common/autotest_common.sh@960 -- # wait 100427 00:25:10.905 04:21:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:10.905 04:21:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:10.905 04:21:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:10.905 04:21:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.905 04:21:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:10.905 04:21:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.905 04:21:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.905 04:21:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.905 04:21:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:10.905 00:25:10.905 real 0m46.730s 00:25:10.905 user 2m16.615s 00:25:10.905 sys 0m5.184s 00:25:10.905 04:21:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:10.905 04:21:12 -- common/autotest_common.sh@10 -- # set +x 00:25:10.905 ************************************ 00:25:10.905 END TEST nvmf_timeout 00:25:10.905 ************************************ 00:25:11.190 04:21:12 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:11.190 04:21:12 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:11.190 04:21:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.190 04:21:12 -- common/autotest_common.sh@10 -- # set +x 00:25:11.190 04:21:12 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:11.190 00:25:11.190 real 17m29.432s 00:25:11.190 user 55m38.405s 00:25:11.190 sys 3m42.178s 00:25:11.190 04:21:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:11.190 04:21:12 -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.190 ************************************ 00:25:11.190 END TEST nvmf_tcp 00:25:11.190 ************************************ 00:25:11.190 04:21:12 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:11.190 04:21:12 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:11.190 04:21:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:11.190 04:21:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:11.190 04:21:12 -- common/autotest_common.sh@10 -- # set +x 00:25:11.190 ************************************ 00:25:11.190 START TEST spdkcli_nvmf_tcp 00:25:11.190 ************************************ 00:25:11.190 04:21:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:11.190 * Looking for test storage... 00:25:11.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:11.190 04:21:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:11.190 04:21:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:11.190 04:21:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:11.462 04:21:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:11.462 04:21:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:11.462 04:21:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:11.462 04:21:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:11.462 04:21:12 -- scripts/common.sh@335 -- # IFS=.-: 00:25:11.462 04:21:12 -- scripts/common.sh@335 -- # read -ra ver1 00:25:11.462 04:21:12 -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.462 04:21:12 -- scripts/common.sh@336 -- # read -ra ver2 00:25:11.462 04:21:12 -- scripts/common.sh@337 -- # local 'op=<' 00:25:11.462 04:21:12 -- scripts/common.sh@339 -- # ver1_l=2 00:25:11.462 04:21:12 -- scripts/common.sh@340 -- # ver2_l=1 00:25:11.462 04:21:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:11.462 04:21:12 -- scripts/common.sh@343 -- # case "$op" in 00:25:11.462 04:21:12 -- scripts/common.sh@344 -- # : 1 00:25:11.462 04:21:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:11.462 04:21:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.462 04:21:12 -- scripts/common.sh@364 -- # decimal 1 00:25:11.462 04:21:12 -- scripts/common.sh@352 -- # local d=1 00:25:11.462 04:21:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.462 04:21:12 -- scripts/common.sh@354 -- # echo 1 00:25:11.462 04:21:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:11.462 04:21:12 -- scripts/common.sh@365 -- # decimal 2 00:25:11.462 04:21:12 -- scripts/common.sh@352 -- # local d=2 00:25:11.462 04:21:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.462 04:21:12 -- scripts/common.sh@354 -- # echo 2 00:25:11.462 04:21:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:11.462 04:21:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:11.462 04:21:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:11.462 04:21:12 -- scripts/common.sh@367 -- # return 0 00:25:11.462 04:21:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.462 04:21:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.462 --rc genhtml_branch_coverage=1 00:25:11.462 --rc genhtml_function_coverage=1 00:25:11.462 --rc genhtml_legend=1 00:25:11.462 --rc geninfo_all_blocks=1 00:25:11.462 --rc geninfo_unexecuted_blocks=1 00:25:11.462 00:25:11.462 ' 00:25:11.462 04:21:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.462 --rc genhtml_branch_coverage=1 00:25:11.462 --rc genhtml_function_coverage=1 00:25:11.462 --rc genhtml_legend=1 00:25:11.462 --rc geninfo_all_blocks=1 00:25:11.462 --rc geninfo_unexecuted_blocks=1 00:25:11.462 00:25:11.462 ' 00:25:11.462 04:21:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.462 --rc genhtml_branch_coverage=1 00:25:11.462 --rc genhtml_function_coverage=1 00:25:11.462 --rc genhtml_legend=1 00:25:11.462 --rc geninfo_all_blocks=1 00:25:11.462 --rc geninfo_unexecuted_blocks=1 00:25:11.462 00:25:11.462 ' 00:25:11.462 04:21:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.462 --rc genhtml_branch_coverage=1 00:25:11.462 --rc genhtml_function_coverage=1 00:25:11.462 --rc genhtml_legend=1 00:25:11.462 --rc geninfo_all_blocks=1 00:25:11.462 --rc geninfo_unexecuted_blocks=1 00:25:11.462 00:25:11.462 ' 00:25:11.462 04:21:12 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:11.462 04:21:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:11.462 04:21:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:11.462 04:21:12 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:11.462 04:21:12 -- nvmf/common.sh@7 -- # uname -s 00:25:11.462 04:21:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.462 04:21:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.462 04:21:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.462 04:21:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.462 04:21:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.462 04:21:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.462 04:21:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
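The xtrace above shows scripts/common.sh pulling the installed lcov version with awk '{print $NF}' and running it through the lt/cmp_versions helpers before exporting the branch- and function-coverage LCOV options. A rough stand-alone equivalent of that dotted-version check, assuming GNU sort -V instead of the helper's field-by-field loop (variable names here are illustrative), would be:

  version_lt() {
      # true when $1 sorts strictly before $2 in dotted-version order
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }

  lcov_ver=$(lcov --version | awk '{print $NF}')
  if version_lt "$lcov_ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi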
00:25:11.462 04:21:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.462 04:21:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.462 04:21:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.462 04:21:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:25:11.462 04:21:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:25:11.462 04:21:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.462 04:21:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.462 04:21:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:11.462 04:21:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:11.462 04:21:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.462 04:21:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.462 04:21:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.463 04:21:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.463 04:21:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.463 04:21:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.463 04:21:12 -- paths/export.sh@5 -- # export PATH 00:25:11.463 04:21:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.463 04:21:12 -- nvmf/common.sh@46 -- # : 0 00:25:11.463 04:21:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:11.463 04:21:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:11.463 04:21:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:11.463 04:21:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.463 04:21:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.463 04:21:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:11.463 04:21:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:11.463 04:21:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:11.463 04:21:12 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:11.463 04:21:13 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:11.463 04:21:13 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:11.463 04:21:13 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:11.463 04:21:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.463 04:21:13 -- common/autotest_common.sh@10 -- # set +x 00:25:11.463 04:21:13 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:11.463 04:21:13 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101326 00:25:11.463 04:21:13 -- spdkcli/common.sh@34 -- # waitforlisten 101326 00:25:11.463 04:21:13 -- common/autotest_common.sh@829 -- # '[' -z 101326 ']' 00:25:11.463 04:21:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.463 04:21:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.463 04:21:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.463 04:21:13 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:11.463 04:21:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.463 04:21:13 -- common/autotest_common.sh@10 -- # set +x 00:25:11.463 [2024-11-26 04:21:13.066982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:11.463 [2024-11-26 04:21:13.067736] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101326 ] 00:25:11.463 [2024-11-26 04:21:13.210193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.722 [2024-11-26 04:21:13.298526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.722 [2024-11-26 04:21:13.299183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.722 [2024-11-26 04:21:13.299204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.659 04:21:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.659 04:21:14 -- common/autotest_common.sh@862 -- # return 0 00:25:12.659 04:21:14 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:12.659 04:21:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:12.659 04:21:14 -- common/autotest_common.sh@10 -- # set +x 00:25:12.659 04:21:14 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:12.659 04:21:14 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:12.659 04:21:14 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:12.659 04:21:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.659 04:21:14 -- common/autotest_common.sh@10 -- # set +x 00:25:12.659 04:21:14 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:12.659 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:12.659 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:12.659 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:12.659 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:12.659 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:12.659 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:12.659 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:12.659 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:12.659 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:12.659 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:12.659 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:12.659 ' 00:25:12.918 [2024-11-26 04:21:14.573673] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:15.453 [2024-11-26 04:21:16.839850] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.389 [2024-11-26 04:21:18.129472] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:18.922 [2024-11-26 04:21:20.516593] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:20.827 [2024-11-26 04:21:22.575125] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:22.749 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:22.749 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:22.749 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:22.749 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:22.749 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:22.749 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:22.749 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:22.749 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:22.749 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:22.749 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:22.749 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:22.749 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:22.750 04:21:24 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:22.750 04:21:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.750 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:22.750 04:21:24 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:22.750 04:21:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.750 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:22.750 04:21:24 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:22.750 04:21:24 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:23.007 04:21:24 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:23.264 04:21:24 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:23.264 04:21:24 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:23.264 04:21:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.264 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:23.264 04:21:24 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:23.264 04:21:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.264 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:25:23.264 04:21:24 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:23.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:23.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:23.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:23.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:23.264 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:23.264 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.264 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:23.264 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:23.264 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:23.264 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:23.264 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:23.264 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:23.264 ' 00:25:28.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:28.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:28.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:28.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:28.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:28.530 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:28.530 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.530 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:28.530 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:28.530 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:28.530 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:28.530 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:28.530 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:28.530 04:21:30 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:28.530 04:21:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.530 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:28.789 04:21:30 -- spdkcli/nvmf.sh@90 -- # killprocess 101326 00:25:28.789 04:21:30 -- common/autotest_common.sh@936 -- # '[' -z 101326 ']' 00:25:28.789 04:21:30 -- common/autotest_common.sh@940 -- # kill -0 101326 00:25:28.789 04:21:30 -- common/autotest_common.sh@941 -- # uname 00:25:28.789 04:21:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.789 04:21:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101326 00:25:28.789 04:21:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:28.789 04:21:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:28.789 04:21:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101326' 00:25:28.789 killing process with pid 101326 00:25:28.789 04:21:30 -- common/autotest_common.sh@955 -- # kill 101326 00:25:28.789 [2024-11-26 04:21:30.353687] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:28.789 04:21:30 -- common/autotest_common.sh@960 -- # wait 101326 00:25:29.048 04:21:30 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:29.048 04:21:30 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:29.048 04:21:30 -- spdkcli/common.sh@13 -- # '[' -n 101326 ']' 00:25:29.048 04:21:30 -- spdkcli/common.sh@14 -- # killprocess 101326 00:25:29.048 04:21:30 -- common/autotest_common.sh@936 -- # '[' -z 101326 ']' 00:25:29.048 04:21:30 -- common/autotest_common.sh@940 -- # kill -0 101326 00:25:29.048 Process with pid 101326 is not found 00:25:29.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101326) - No such process 00:25:29.048 04:21:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101326 is not found' 00:25:29.048 04:21:30 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:29.048 04:21:30 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:29.048 04:21:30 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:29.048 ************************************ 00:25:29.048 END TEST spdkcli_nvmf_tcp 00:25:29.048 ************************************ 00:25:29.048 00:25:29.048 real 0m17.821s 00:25:29.048 user 0m38.620s 00:25:29.048 sys 0m0.907s 00:25:29.048 04:21:30 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:25:29.048 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:29.048 04:21:30 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.048 04:21:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:29.048 04:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:29.048 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:29.048 ************************************ 00:25:29.048 START TEST nvmf_identify_passthru 00:25:29.048 ************************************ 00:25:29.048 04:21:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.048 * Looking for test storage... 00:25:29.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:29.048 04:21:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:29.048 04:21:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:29.048 04:21:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:29.308 04:21:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:29.308 04:21:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:29.308 04:21:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:29.308 04:21:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:29.308 04:21:30 -- scripts/common.sh@335 -- # IFS=.-: 00:25:29.308 04:21:30 -- scripts/common.sh@335 -- # read -ra ver1 00:25:29.308 04:21:30 -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.308 04:21:30 -- scripts/common.sh@336 -- # read -ra ver2 00:25:29.308 04:21:30 -- scripts/common.sh@337 -- # local 'op=<' 00:25:29.308 04:21:30 -- scripts/common.sh@339 -- # ver1_l=2 00:25:29.308 04:21:30 -- scripts/common.sh@340 -- # ver2_l=1 00:25:29.308 04:21:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:29.308 04:21:30 -- scripts/common.sh@343 -- # case "$op" in 00:25:29.308 04:21:30 -- scripts/common.sh@344 -- # : 1 00:25:29.308 04:21:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:29.308 04:21:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.308 04:21:30 -- scripts/common.sh@364 -- # decimal 1 00:25:29.308 04:21:30 -- scripts/common.sh@352 -- # local d=1 00:25:29.308 04:21:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.308 04:21:30 -- scripts/common.sh@354 -- # echo 1 00:25:29.308 04:21:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:29.308 04:21:30 -- scripts/common.sh@365 -- # decimal 2 00:25:29.308 04:21:30 -- scripts/common.sh@352 -- # local d=2 00:25:29.308 04:21:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.308 04:21:30 -- scripts/common.sh@354 -- # echo 2 00:25:29.308 04:21:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:29.308 04:21:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:29.308 04:21:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:29.308 04:21:30 -- scripts/common.sh@367 -- # return 0 00:25:29.309 04:21:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.309 04:21:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.309 --rc genhtml_branch_coverage=1 00:25:29.309 --rc genhtml_function_coverage=1 00:25:29.309 --rc genhtml_legend=1 00:25:29.309 --rc geninfo_all_blocks=1 00:25:29.309 --rc geninfo_unexecuted_blocks=1 00:25:29.309 00:25:29.309 ' 00:25:29.309 04:21:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.309 --rc genhtml_branch_coverage=1 00:25:29.309 --rc genhtml_function_coverage=1 00:25:29.309 --rc genhtml_legend=1 00:25:29.309 --rc geninfo_all_blocks=1 00:25:29.309 --rc geninfo_unexecuted_blocks=1 00:25:29.309 00:25:29.309 ' 00:25:29.309 04:21:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.309 --rc genhtml_branch_coverage=1 00:25:29.309 --rc genhtml_function_coverage=1 00:25:29.309 --rc genhtml_legend=1 00:25:29.309 --rc geninfo_all_blocks=1 00:25:29.309 --rc geninfo_unexecuted_blocks=1 00:25:29.309 00:25:29.309 ' 00:25:29.309 04:21:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:29.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.309 --rc genhtml_branch_coverage=1 00:25:29.309 --rc genhtml_function_coverage=1 00:25:29.309 --rc genhtml_legend=1 00:25:29.309 --rc geninfo_all_blocks=1 00:25:29.309 --rc geninfo_unexecuted_blocks=1 00:25:29.309 00:25:29.309 ' 00:25:29.309 04:21:30 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.309 04:21:30 -- nvmf/common.sh@7 -- # uname -s 00:25:29.309 04:21:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.309 04:21:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.309 04:21:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.309 04:21:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.309 04:21:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.309 04:21:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.309 04:21:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.309 04:21:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.309 04:21:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.309 04:21:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.309 04:21:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 
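Just above, nvmf/common.sh asks nvme-cli for a fresh host NQN; the UUID embedded in it becomes the host ID on the following line, and both are packed into the NVME_HOST argument array. A short sketch of that derivation (the parameter-expansion spelling and the commented connect invocation are illustrative, not the helper's exact code):

    # nvme-cli emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # An initiator-side connect could then present that identity to the target, e.g.:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"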
00:25:29.309 04:21:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:25:29.309 04:21:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.309 04:21:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.309 04:21:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:29.309 04:21:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.309 04:21:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.309 04:21:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.309 04:21:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.309 04:21:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- paths/export.sh@5 -- # export PATH 00:25:29.309 04:21:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- nvmf/common.sh@46 -- # : 0 00:25:29.309 04:21:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:29.309 04:21:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:29.309 04:21:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:29.309 04:21:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.309 04:21:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.309 04:21:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:29.309 04:21:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:29.309 04:21:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:29.309 04:21:30 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.309 04:21:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.309 04:21:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.309 04:21:30 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.309 04:21:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- paths/export.sh@5 -- # export PATH 00:25:29.309 04:21:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.309 04:21:30 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:29.309 04:21:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:29.309 04:21:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.309 04:21:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:29.309 04:21:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:29.309 04:21:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:29.309 04:21:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.309 04:21:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:29.309 04:21:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.309 04:21:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:29.309 04:21:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:29.309 04:21:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:29.309 04:21:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:29.309 04:21:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:29.309 04:21:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:29.309 04:21:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.309 04:21:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.309 04:21:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:29.309 04:21:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:29.309 04:21:30 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:29.309 04:21:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:29.309 04:21:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:29.309 04:21:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.309 04:21:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:29.309 04:21:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:29.309 04:21:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:29.309 04:21:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:29.309 04:21:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:29.309 04:21:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:29.309 Cannot find device "nvmf_tgt_br" 00:25:29.309 04:21:30 -- nvmf/common.sh@154 -- # true 00:25:29.309 04:21:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:29.309 Cannot find device "nvmf_tgt_br2" 00:25:29.309 04:21:30 -- nvmf/common.sh@155 -- # true 00:25:29.309 04:21:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:29.309 04:21:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:29.309 Cannot find device "nvmf_tgt_br" 00:25:29.309 04:21:30 -- nvmf/common.sh@157 -- # true 00:25:29.309 04:21:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:29.309 Cannot find device "nvmf_tgt_br2" 00:25:29.309 04:21:30 -- nvmf/common.sh@158 -- # true 00:25:29.309 04:21:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:29.309 04:21:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:29.309 04:21:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:29.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.309 04:21:31 -- nvmf/common.sh@161 -- # true 00:25:29.309 04:21:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:29.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.309 04:21:31 -- nvmf/common.sh@162 -- # true 00:25:29.309 04:21:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:29.310 04:21:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:29.310 04:21:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:29.310 04:21:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:29.310 04:21:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:29.310 04:21:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:29.569 04:21:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:29.569 04:21:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:29.569 04:21:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:29.569 04:21:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:29.569 04:21:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:29.569 04:21:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:29.569 04:21:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:29.569 04:21:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:29.569 04:21:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.569 04:21:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:29.569 04:21:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:29.569 04:21:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:29.569 04:21:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.569 04:21:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.569 04:21:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.569 04:21:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.569 04:21:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.569 04:21:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:29.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:25:29.569 00:25:29.569 --- 10.0.0.2 ping statistics --- 00:25:29.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.569 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:29.569 04:21:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:29.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:29.569 00:25:29.569 --- 10.0.0.3 ping statistics --- 00:25:29.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.569 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:29.569 04:21:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:29.569 00:25:29.569 --- 10.0.0.1 ping statistics --- 00:25:29.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.569 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:29.569 04:21:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.569 04:21:31 -- nvmf/common.sh@421 -- # return 0 00:25:29.569 04:21:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:29.569 04:21:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.569 04:21:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.569 04:21:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.569 04:21:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.569 04:21:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.569 04:21:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.569 04:21:31 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:29.569 04:21:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.569 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:29.569 04:21:31 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:29.569 04:21:31 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:29.569 04:21:31 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:29.569 04:21:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:29.569 04:21:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:29.569 04:21:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:29.569 04:21:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:29.569 04:21:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:29.569 04:21:31 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:29.569 04:21:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:29.569 04:21:31 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:29.569 04:21:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:29.569 04:21:31 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:29.569 04:21:31 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:29.569 04:21:31 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:29.569 04:21:31 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:29.569 04:21:31 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:29.569 04:21:31 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:29.828 04:21:31 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:29.828 04:21:31 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:29.828 04:21:31 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:29.828 04:21:31 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:30.087 04:21:31 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:30.087 04:21:31 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:30.087 04:21:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.087 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:30.087 04:21:31 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:30.087 04:21:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.087 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:30.087 04:21:31 -- target/identify_passthru.sh@31 -- # nvmfpid=101830 00:25:30.087 04:21:31 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:30.087 04:21:31 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.087 04:21:31 -- target/identify_passthru.sh@35 -- # waitforlisten 101830 00:25:30.087 04:21:31 -- common/autotest_common.sh@829 -- # '[' -z 101830 ']' 00:25:30.087 04:21:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.087 04:21:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.087 04:21:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.087 04:21:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.087 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:30.087 [2024-11-26 04:21:31.743632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:30.087 [2024-11-26 04:21:31.743770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.347 [2024-11-26 04:21:31.879296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.347 [2024-11-26 04:21:31.947770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.347 [2024-11-26 04:21:31.948136] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.347 [2024-11-26 04:21:31.948233] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.347 [2024-11-26 04:21:31.948356] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
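Because nvmf_tgt is launched with --wait-for-rpc, nothing is configured until JSON-RPC calls arrive on /var/tmp/spdk.sock; the rpc_cmd invocations that follow in the trace correspond to plain scripts/rpc.py calls. A sketch of that same sequence, assuming the default RPC socket (arguments mirror the trace below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Enable identify passthrough before the framework starts, then finish init.
    $rpc nvmf_set_config --passthru-identify-ctrlr
    $rpc framework_start_init

    # TCP transport, then expose the local PCIe drive as namespace 1 of cnode1.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420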
00:25:30.347 [2024-11-26 04:21:31.948631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.347 [2024-11-26 04:21:31.948780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.347 [2024-11-26 04:21:31.948811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.347 [2024-11-26 04:21:31.948816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.347 04:21:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.347 04:21:32 -- common/autotest_common.sh@862 -- # return 0 00:25:30.347 04:21:32 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:30.347 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.347 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.347 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.347 04:21:32 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:30.347 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.347 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 [2024-11-26 04:21:32.126183] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.606 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 [2024-11-26 04:21:32.136630] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:30.606 04:21:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 04:21:32 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:30.606 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 Nvme0n1 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:30.606 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:30.606 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.606 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 [2024-11-26 04:21:32.271986] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:30.606 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.606 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:30.606 [2024-11-26 04:21:32.279739] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:30.606 [ 00:25:30.606 { 00:25:30.606 "allow_any_host": true, 00:25:30.606 "hosts": [], 00:25:30.606 "listen_addresses": [], 00:25:30.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:30.606 "subtype": "Discovery" 00:25:30.606 }, 00:25:30.606 { 00:25:30.606 "allow_any_host": true, 00:25:30.606 "hosts": [], 00:25:30.606 "listen_addresses": [ 00:25:30.606 { 00:25:30.606 "adrfam": "IPv4", 00:25:30.606 "traddr": "10.0.0.2", 00:25:30.606 "transport": "TCP", 00:25:30.606 "trsvcid": "4420", 00:25:30.606 "trtype": "TCP" 00:25:30.606 } 00:25:30.606 ], 00:25:30.606 "max_cntlid": 65519, 00:25:30.606 "max_namespaces": 1, 00:25:30.606 "min_cntlid": 1, 00:25:30.606 "model_number": "SPDK bdev Controller", 00:25:30.606 "namespaces": [ 00:25:30.606 { 00:25:30.606 "bdev_name": "Nvme0n1", 00:25:30.606 "name": "Nvme0n1", 00:25:30.606 "nguid": "E6E0DDB7577B493BBD5AAE179832BE89", 00:25:30.606 "nsid": 1, 00:25:30.606 "uuid": "e6e0ddb7-577b-493b-bd5a-ae179832be89" 00:25:30.606 } 00:25:30.606 ], 00:25:30.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.606 "serial_number": "SPDK00000000000001", 00:25:30.606 "subtype": "NVMe" 00:25:30.606 } 00:25:30.606 ] 00:25:30.606 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.606 04:21:32 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:30.606 04:21:32 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:30.606 04:21:32 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:30.865 04:21:32 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:30.865 04:21:32 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:30.865 04:21:32 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:30.865 04:21:32 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:31.124 04:21:32 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:31.124 04:21:32 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:31.124 04:21:32 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:31.124 04:21:32 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.124 04:21:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.124 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:31.124 04:21:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.124 04:21:32 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:31.124 04:21:32 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:31.124 04:21:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:31.124 04:21:32 -- nvmf/common.sh@116 -- # sync 00:25:31.124 04:21:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:31.124 04:21:32 -- nvmf/common.sh@119 -- # set +e 00:25:31.124 04:21:32 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:31.124 04:21:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:31.124 rmmod nvme_tcp 00:25:31.124 rmmod nvme_fabrics 00:25:31.124 rmmod nvme_keyring 00:25:31.124 04:21:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:31.124 04:21:32 -- nvmf/common.sh@123 -- # set -e 00:25:31.124 04:21:32 -- nvmf/common.sh@124 -- # return 0 00:25:31.124 04:21:32 -- nvmf/common.sh@477 -- # '[' -n 101830 ']' 00:25:31.124 04:21:32 -- nvmf/common.sh@478 -- # killprocess 101830 00:25:31.124 04:21:32 -- common/autotest_common.sh@936 -- # '[' -z 101830 ']' 00:25:31.124 04:21:32 -- common/autotest_common.sh@940 -- # kill -0 101830 00:25:31.124 04:21:32 -- common/autotest_common.sh@941 -- # uname 00:25:31.124 04:21:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:31.124 04:21:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101830 00:25:31.383 killing process with pid 101830 00:25:31.383 04:21:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:31.383 04:21:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:31.383 04:21:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101830' 00:25:31.383 04:21:32 -- common/autotest_common.sh@955 -- # kill 101830 00:25:31.383 [2024-11-26 04:21:32.913246] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:31.383 04:21:32 -- common/autotest_common.sh@960 -- # wait 101830 00:25:31.642 04:21:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:31.642 04:21:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:31.642 04:21:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:31.642 04:21:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.642 04:21:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:31.642 04:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.642 04:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:31.642 04:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.642 04:21:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:31.642 ************************************ 00:25:31.642 END TEST nvmf_identify_passthru 00:25:31.642 ************************************ 00:25:31.642 00:25:31.642 real 0m2.534s 00:25:31.642 user 0m5.079s 00:25:31.642 sys 0m0.850s 00:25:31.642 04:21:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.642 04:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:31.642 04:21:33 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:31.642 04:21:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:31.642 04:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:31.642 04:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:31.642 ************************************ 00:25:31.642 START TEST nvmf_dif 00:25:31.642 ************************************ 00:25:31.642 04:21:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:31.642 * Looking for test storage... 
00:25:31.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:31.642 04:21:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:31.642 04:21:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:31.642 04:21:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:31.901 04:21:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:31.901 04:21:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:31.901 04:21:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:31.901 04:21:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:31.901 04:21:33 -- scripts/common.sh@335 -- # IFS=.-: 00:25:31.901 04:21:33 -- scripts/common.sh@335 -- # read -ra ver1 00:25:31.901 04:21:33 -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.901 04:21:33 -- scripts/common.sh@336 -- # read -ra ver2 00:25:31.901 04:21:33 -- scripts/common.sh@337 -- # local 'op=<' 00:25:31.901 04:21:33 -- scripts/common.sh@339 -- # ver1_l=2 00:25:31.901 04:21:33 -- scripts/common.sh@340 -- # ver2_l=1 00:25:31.901 04:21:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:31.901 04:21:33 -- scripts/common.sh@343 -- # case "$op" in 00:25:31.901 04:21:33 -- scripts/common.sh@344 -- # : 1 00:25:31.901 04:21:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:31.901 04:21:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.901 04:21:33 -- scripts/common.sh@364 -- # decimal 1 00:25:31.901 04:21:33 -- scripts/common.sh@352 -- # local d=1 00:25:31.901 04:21:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.901 04:21:33 -- scripts/common.sh@354 -- # echo 1 00:25:31.901 04:21:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:31.901 04:21:33 -- scripts/common.sh@365 -- # decimal 2 00:25:31.901 04:21:33 -- scripts/common.sh@352 -- # local d=2 00:25:31.901 04:21:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.901 04:21:33 -- scripts/common.sh@354 -- # echo 2 00:25:31.901 04:21:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:31.901 04:21:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:31.901 04:21:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:31.901 04:21:33 -- scripts/common.sh@367 -- # return 0 00:25:31.901 04:21:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.901 04:21:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.901 --rc genhtml_branch_coverage=1 00:25:31.901 --rc genhtml_function_coverage=1 00:25:31.901 --rc genhtml_legend=1 00:25:31.901 --rc geninfo_all_blocks=1 00:25:31.901 --rc geninfo_unexecuted_blocks=1 00:25:31.901 00:25:31.901 ' 00:25:31.901 04:21:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.901 --rc genhtml_branch_coverage=1 00:25:31.901 --rc genhtml_function_coverage=1 00:25:31.901 --rc genhtml_legend=1 00:25:31.901 --rc geninfo_all_blocks=1 00:25:31.901 --rc geninfo_unexecuted_blocks=1 00:25:31.901 00:25:31.901 ' 00:25:31.901 04:21:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.901 --rc genhtml_branch_coverage=1 00:25:31.901 --rc genhtml_function_coverage=1 00:25:31.901 --rc genhtml_legend=1 00:25:31.901 --rc geninfo_all_blocks=1 00:25:31.901 --rc geninfo_unexecuted_blocks=1 00:25:31.901 00:25:31.901 ' 00:25:31.901 
04:21:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.901 --rc genhtml_branch_coverage=1 00:25:31.901 --rc genhtml_function_coverage=1 00:25:31.901 --rc genhtml_legend=1 00:25:31.901 --rc geninfo_all_blocks=1 00:25:31.901 --rc geninfo_unexecuted_blocks=1 00:25:31.901 00:25:31.901 ' 00:25:31.901 04:21:33 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:31.901 04:21:33 -- nvmf/common.sh@7 -- # uname -s 00:25:31.901 04:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.901 04:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.901 04:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.901 04:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.901 04:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.901 04:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.901 04:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.901 04:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.901 04:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.901 04:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.901 04:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:25:31.901 04:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:25:31.901 04:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.901 04:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.901 04:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:31.901 04:21:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:31.901 04:21:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.901 04:21:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.901 04:21:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.901 04:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.902 04:21:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.902 04:21:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.902 04:21:33 -- paths/export.sh@5 -- # export PATH 00:25:31.902 04:21:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.902 04:21:33 -- nvmf/common.sh@46 -- # : 0 00:25:31.902 04:21:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:31.902 04:21:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:31.902 04:21:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:31.902 04:21:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.902 04:21:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.902 04:21:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:31.902 04:21:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:31.902 04:21:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:31.902 04:21:33 -- target/dif.sh@15 -- # NULL_META=16 00:25:31.902 04:21:33 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:31.902 04:21:33 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:31.902 04:21:33 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:31.902 04:21:33 -- target/dif.sh@135 -- # nvmftestinit 00:25:31.902 04:21:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:31.902 04:21:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.902 04:21:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:31.902 04:21:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:31.902 04:21:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:31.902 04:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.902 04:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:31.902 04:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.902 04:21:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:31.902 04:21:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:31.902 04:21:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:31.902 04:21:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:31.902 04:21:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:31.902 04:21:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:31.902 04:21:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.902 04:21:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.902 04:21:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:31.902 04:21:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:31.902 04:21:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:31.902 04:21:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:31.902 04:21:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:31.902 04:21:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.902 04:21:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:31.902 04:21:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:31.902 04:21:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:31.902 04:21:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:31.902 04:21:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:31.902 04:21:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:31.902 Cannot find device "nvmf_tgt_br" 
00:25:31.902 04:21:33 -- nvmf/common.sh@154 -- # true 00:25:31.902 04:21:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:31.902 Cannot find device "nvmf_tgt_br2" 00:25:31.902 04:21:33 -- nvmf/common.sh@155 -- # true 00:25:31.902 04:21:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:31.902 04:21:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:31.902 Cannot find device "nvmf_tgt_br" 00:25:31.902 04:21:33 -- nvmf/common.sh@157 -- # true 00:25:31.902 04:21:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:31.902 Cannot find device "nvmf_tgt_br2" 00:25:31.902 04:21:33 -- nvmf/common.sh@158 -- # true 00:25:31.902 04:21:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:31.902 04:21:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:31.902 04:21:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:31.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:31.902 04:21:33 -- nvmf/common.sh@161 -- # true 00:25:31.902 04:21:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:31.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:31.902 04:21:33 -- nvmf/common.sh@162 -- # true 00:25:31.902 04:21:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:31.902 04:21:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:31.902 04:21:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:31.902 04:21:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:31.902 04:21:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:31.902 04:21:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:32.161 04:21:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:32.161 04:21:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:32.161 04:21:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:32.161 04:21:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:32.161 04:21:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:32.161 04:21:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:32.161 04:21:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:32.161 04:21:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:32.161 04:21:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:32.161 04:21:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:32.161 04:21:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:32.161 04:21:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:32.161 04:21:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:32.161 04:21:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:32.161 04:21:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:32.161 04:21:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:32.161 04:21:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:32.161 04:21:33 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:32.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:25:32.161 00:25:32.161 --- 10.0.0.2 ping statistics --- 00:25:32.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.161 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:32.161 04:21:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:32.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:32.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:32.161 00:25:32.161 --- 10.0.0.3 ping statistics --- 00:25:32.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.161 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:32.161 04:21:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:32.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:25:32.161 00:25:32.161 --- 10.0.0.1 ping statistics --- 00:25:32.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.161 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:25:32.161 04:21:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.161 04:21:33 -- nvmf/common.sh@421 -- # return 0 00:25:32.161 04:21:33 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:32.161 04:21:33 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:32.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:32.420 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:32.420 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:32.679 04:21:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.679 04:21:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:32.679 04:21:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:32.679 04:21:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.679 04:21:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:32.679 04:21:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:32.679 04:21:34 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:32.679 04:21:34 -- target/dif.sh@137 -- # nvmfappstart 00:25:32.679 04:21:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:32.679 04:21:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.679 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:32.679 04:21:34 -- nvmf/common.sh@469 -- # nvmfpid=102169 00:25:32.679 04:21:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:32.679 04:21:34 -- nvmf/common.sh@470 -- # waitforlisten 102169 00:25:32.679 04:21:34 -- common/autotest_common.sh@829 -- # '[' -z 102169 ']' 00:25:32.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.679 04:21:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.679 04:21:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.679 04:21:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
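For reference, the loopback topology that the preceding nvmf_veth_init commands assemble can be reproduced by hand with roughly the following (a minimal sketch using the device names and addresses from this log; requires root, error handling omitted). The initiator keeps 10.0.0.1 in the default namespace, the target owns 10.0.0.2/10.0.0.3 inside nvmf_tgt_ns_spdk, and the veth peers are bridged on nvmf_br so NVMe/TCP traffic on port 4420 can flow:
# target-side namespace plus three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addresses: initiator outside, target IPs inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
# bridge the host-side veth ends together and open port 4420
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator-side reachability check, as in the log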
00:25:32.679 04:21:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.679 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:32.679 [2024-11-26 04:21:34.322545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:32.679 [2024-11-26 04:21:34.322648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.938 [2024-11-26 04:21:34.465027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.938 [2024-11-26 04:21:34.551299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.938 [2024-11-26 04:21:34.551484] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.938 [2024-11-26 04:21:34.551503] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.938 [2024-11-26 04:21:34.551515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.938 [2024-11-26 04:21:34.551557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.875 04:21:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.875 04:21:35 -- common/autotest_common.sh@862 -- # return 0 00:25:33.875 04:21:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:33.875 04:21:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 04:21:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.876 04:21:35 -- target/dif.sh@139 -- # create_transport 00:25:33.876 04:21:35 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:33.876 04:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 [2024-11-26 04:21:35.372605] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.876 04:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.876 04:21:35 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:33.876 04:21:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:33.876 04:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 ************************************ 00:25:33.876 START TEST fio_dif_1_default 00:25:33.876 ************************************ 00:25:33.876 04:21:35 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:33.876 04:21:35 -- target/dif.sh@86 -- # create_subsystems 0 00:25:33.876 04:21:35 -- target/dif.sh@28 -- # local sub 00:25:33.876 04:21:35 -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.876 04:21:35 -- target/dif.sh@31 -- # create_subsystem 0 00:25:33.876 04:21:35 -- target/dif.sh@18 -- # local sub_id=0 00:25:33.876 04:21:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:33.876 04:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 bdev_null0 00:25:33.876 04:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.876 04:21:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:33.876 04:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 04:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.876 04:21:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:33.876 04:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 04:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.876 04:21:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.876 04:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.876 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 [2024-11-26 04:21:35.416764] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.876 04:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.876 04:21:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:33.876 04:21:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:33.876 04:21:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:33.876 04:21:35 -- nvmf/common.sh@520 -- # config=() 00:25:33.876 04:21:35 -- nvmf/common.sh@520 -- # local subsystem config 00:25:33.876 04:21:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.876 04:21:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.876 04:21:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.876 { 00:25:33.876 "params": { 00:25:33.876 "name": "Nvme$subsystem", 00:25:33.876 "trtype": "$TEST_TRANSPORT", 00:25:33.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.876 "adrfam": "ipv4", 00:25:33.876 "trsvcid": "$NVMF_PORT", 00:25:33.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.876 "hdgst": ${hdgst:-false}, 00:25:33.876 "ddgst": ${ddgst:-false} 00:25:33.876 }, 00:25:33.876 "method": "bdev_nvme_attach_controller" 00:25:33.876 } 00:25:33.876 EOF 00:25:33.876 )") 00:25:33.876 04:21:35 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.876 04:21:35 -- target/dif.sh@82 -- # gen_fio_conf 00:25:33.876 04:21:35 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:33.876 04:21:35 -- target/dif.sh@54 -- # local file 00:25:33.876 04:21:35 -- target/dif.sh@56 -- # cat 00:25:33.876 04:21:35 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:33.876 04:21:35 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:33.876 04:21:35 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.876 04:21:35 -- common/autotest_common.sh@1330 -- # shift 00:25:33.876 04:21:35 -- nvmf/common.sh@542 -- # cat 00:25:33.876 04:21:35 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:33.876 04:21:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.876 04:21:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:33.876 
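The rpc_cmd helper used in the setup above is effectively SPDK's JSON-RPC client talking to the running nvmf_tgt over /var/tmp/spdk.sock. Done by hand with scripts/rpc.py from an SPDK checkout (the rpc.py path is an assumption), the DIF-type-1 target built here looks roughly like this, with every flag copied from the log:
# TCP transport with DIF insert/strip enabled (done once, in dif.sh's create_transport)
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, protection information type 1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# subsystem, namespace and TCP listener on the target-namespace address
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420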
04:21:35 -- target/dif.sh@72 -- # (( file <= files )) 00:25:33.876 04:21:35 -- nvmf/common.sh@544 -- # jq . 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:33.876 04:21:35 -- nvmf/common.sh@545 -- # IFS=, 00:25:33.876 04:21:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:33.876 "params": { 00:25:33.876 "name": "Nvme0", 00:25:33.876 "trtype": "tcp", 00:25:33.876 "traddr": "10.0.0.2", 00:25:33.876 "adrfam": "ipv4", 00:25:33.876 "trsvcid": "4420", 00:25:33.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:33.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:33.876 "hdgst": false, 00:25:33.876 "ddgst": false 00:25:33.876 }, 00:25:33.876 "method": "bdev_nvme_attach_controller" 00:25:33.876 }' 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:33.876 04:21:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:33.876 04:21:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:33.876 04:21:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:33.876 04:21:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:33.876 04:21:35 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:33.876 04:21:35 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:34.135 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:34.135 fio-3.35 00:25:34.135 Starting 1 thread 00:25:34.394 [2024-11-26 04:21:36.058562] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
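Outside the harness, this first run can be reproduced with a JSON config file and a small fio job. This is only a sketch: the JSON restates the bdev_nvme_attach_controller parameters printed above inside the usual subsystems/bdev wrapper, while the job options (randread, 4 KiB blocks, iodepth 4, ~10 s) and the bdev name Nvme0n1 are inferred from the fio banner and run time, since the generated job file itself is not shown in the log:
cat > dif.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=dif.json
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
runtime=10
time_based=1
EOF
# same mechanism as the harness: preload the SPDK fio bdev plugin
LD_PRELOAD=./build/fio/spdk_bdev fio dif.fio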
00:25:34.394 [2024-11-26 04:21:36.058649] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:46.601 00:25:46.601 filename0: (groupid=0, jobs=1): err= 0: pid=102259: Tue Nov 26 04:21:46 2024 00:25:46.601 read: IOPS=5984, BW=23.4MiB/s (24.5MB/s)(234MiB/10008msec) 00:25:46.601 slat (nsec): min=5753, max=51680, avg=6700.37, stdev=1970.85 00:25:46.601 clat (usec): min=347, max=41455, avg=648.57, stdev=3295.41 00:25:46.601 lat (usec): min=353, max=41463, avg=655.27, stdev=3295.49 00:25:46.601 clat percentiles (usec): 00:25:46.601 | 1.00th=[ 355], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:25:46.601 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:25:46.601 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 420], 00:25:46.601 | 99.00th=[ 478], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:25:46.601 | 99.99th=[41157] 00:25:46.601 bw ( KiB/s): min=13856, max=36864, per=100.00%, avg=23953.60, stdev=5787.02, samples=20 00:25:46.601 iops : min= 3464, max= 9216, avg=5988.40, stdev=1446.75, samples=20 00:25:46.601 lat (usec) : 500=99.18%, 750=0.12%, 1000=0.03% 00:25:46.601 lat (msec) : 10=0.01%, 50=0.66% 00:25:46.601 cpu : usr=86.47%, sys=11.22%, ctx=24, majf=0, minf=0 00:25:46.601 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.601 issued rwts: total=59888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.601 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:46.601 00:25:46.601 Run status group 0 (all jobs): 00:25:46.601 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=234MiB (245MB), run=10008-10008msec 00:25:46.601 04:21:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:46.601 04:21:46 -- target/dif.sh@43 -- # local sub 00:25:46.601 04:21:46 -- target/dif.sh@45 -- # for sub in "$@" 00:25:46.601 04:21:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:46.601 04:21:46 -- target/dif.sh@36 -- # local sub_id=0 00:25:46.601 04:21:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 ************************************ 00:25:46.601 END TEST fio_dif_1_default 00:25:46.601 ************************************ 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 00:25:46.601 real 0m11.016s 00:25:46.601 user 0m9.281s 00:25:46.601 sys 0m1.416s 00:25:46.601 04:21:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:46.601 04:21:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:46.601 04:21:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 ************************************ 00:25:46.601 
START TEST fio_dif_1_multi_subsystems 00:25:46.601 ************************************ 00:25:46.601 04:21:46 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:46.601 04:21:46 -- target/dif.sh@92 -- # local files=1 00:25:46.601 04:21:46 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:46.601 04:21:46 -- target/dif.sh@28 -- # local sub 00:25:46.601 04:21:46 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.601 04:21:46 -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.601 04:21:46 -- target/dif.sh@18 -- # local sub_id=0 00:25:46.601 04:21:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 bdev_null0 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 [2024-11-26 04:21:46.491664] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.601 04:21:46 -- target/dif.sh@31 -- # create_subsystem 1 00:25:46.601 04:21:46 -- target/dif.sh@18 -- # local sub_id=1 00:25:46.601 04:21:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 bdev_null1 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.601 04:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.601 
04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.601 04:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.601 04:21:46 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:46.601 04:21:46 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:46.601 04:21:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:46.601 04:21:46 -- nvmf/common.sh@520 -- # config=() 00:25:46.601 04:21:46 -- nvmf/common.sh@520 -- # local subsystem config 00:25:46.601 04:21:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.601 04:21:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.601 { 00:25:46.601 "params": { 00:25:46.601 "name": "Nvme$subsystem", 00:25:46.601 "trtype": "$TEST_TRANSPORT", 00:25:46.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.601 "adrfam": "ipv4", 00:25:46.601 "trsvcid": "$NVMF_PORT", 00:25:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.601 "hdgst": ${hdgst:-false}, 00:25:46.601 "ddgst": ${ddgst:-false} 00:25:46.601 }, 00:25:46.601 "method": "bdev_nvme_attach_controller" 00:25:46.601 } 00:25:46.601 EOF 00:25:46.601 )") 00:25:46.601 04:21:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.601 04:21:46 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.601 04:21:46 -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.601 04:21:46 -- target/dif.sh@54 -- # local file 00:25:46.601 04:21:46 -- target/dif.sh@56 -- # cat 00:25:46.601 04:21:46 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:46.602 04:21:46 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.602 04:21:46 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:46.602 04:21:46 -- nvmf/common.sh@542 -- # cat 00:25:46.602 04:21:46 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.602 04:21:46 -- common/autotest_common.sh@1330 -- # shift 00:25:46.602 04:21:46 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:46.602 04:21:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:46.602 04:21:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.602 04:21:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:46.602 04:21:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.602 { 00:25:46.602 "params": { 00:25:46.602 "name": "Nvme$subsystem", 00:25:46.602 "trtype": "$TEST_TRANSPORT", 00:25:46.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.602 "adrfam": "ipv4", 00:25:46.602 "trsvcid": "$NVMF_PORT", 00:25:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.602 "hdgst": ${hdgst:-false}, 00:25:46.602 "ddgst": ${ddgst:-false} 00:25:46.602 }, 00:25:46.602 "method": "bdev_nvme_attach_controller" 00:25:46.602 } 00:25:46.602 EOF 00:25:46.602 )") 00:25:46.602 04:21:46 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.602 04:21:46 -- target/dif.sh@73 -- # cat 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.602 04:21:46 -- nvmf/common.sh@542 -- # cat 00:25:46.602 04:21:46 -- 
target/dif.sh@72 -- # (( file++ )) 00:25:46.602 04:21:46 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.602 04:21:46 -- nvmf/common.sh@544 -- # jq . 00:25:46.602 04:21:46 -- nvmf/common.sh@545 -- # IFS=, 00:25:46.602 04:21:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:46.602 "params": { 00:25:46.602 "name": "Nvme0", 00:25:46.602 "trtype": "tcp", 00:25:46.602 "traddr": "10.0.0.2", 00:25:46.602 "adrfam": "ipv4", 00:25:46.602 "trsvcid": "4420", 00:25:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.602 "hdgst": false, 00:25:46.602 "ddgst": false 00:25:46.602 }, 00:25:46.602 "method": "bdev_nvme_attach_controller" 00:25:46.602 },{ 00:25:46.602 "params": { 00:25:46.602 "name": "Nvme1", 00:25:46.602 "trtype": "tcp", 00:25:46.602 "traddr": "10.0.0.2", 00:25:46.602 "adrfam": "ipv4", 00:25:46.602 "trsvcid": "4420", 00:25:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.602 "hdgst": false, 00:25:46.602 "ddgst": false 00:25:46.602 }, 00:25:46.602 "method": "bdev_nvme_attach_controller" 00:25:46.602 }' 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:46.602 04:21:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:46.602 04:21:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:46.602 04:21:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:46.602 04:21:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:46.602 04:21:46 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:46.602 04:21:46 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.602 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.602 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.602 fio-3.35 00:25:46.602 Starting 2 threads 00:25:46.602 [2024-11-26 04:21:47.275078] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
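For the two-subsystem case the generated JSON above simply carries one bdev_nvme_attach_controller entry per subsystem (Nvme0 and Nvme1, as printed), and the fio job gets one section per target bdev. Continuing the hand-written sketch from the first run, the only additions would be a second dif.json entry for nqn.2016-06.io.spdk:cnode1 plus a job section like this (the Nvme1n1 bdev name is again an assumption):
cat >> dif.fio <<'EOF'
[filename1]
filename=Nvme1n1
rw=randread
bs=4096
iodepth=4
EOF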
00:25:46.602 [2024-11-26 04:21:47.275144] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:56.592 00:25:56.592 filename0: (groupid=0, jobs=1): err= 0: pid=102420: Tue Nov 26 04:21:57 2024 00:25:56.592 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10017msec) 00:25:56.592 slat (nsec): min=6010, max=39454, avg=8608.43, stdev=4106.17 00:25:56.592 clat (usec): min=374, max=41343, avg=21060.44, stdev=20258.76 00:25:56.592 lat (usec): min=380, max=41353, avg=21069.05, stdev=20258.67 00:25:56.592 clat percentiles (usec): 00:25:56.592 | 1.00th=[ 379], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 400], 00:25:56.592 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[40633], 60.00th=[40633], 00:25:56.592 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:56.592 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:56.592 | 99.99th=[41157] 00:25:56.592 bw ( KiB/s): min= 576, max= 1056, per=33.07%, avg=758.40, stdev=125.49, samples=20 00:25:56.592 iops : min= 144, max= 264, avg=189.60, stdev=31.37, samples=20 00:25:56.592 lat (usec) : 500=46.68%, 750=2.05%, 1000=0.11% 00:25:56.592 lat (msec) : 2=0.21%, 50=50.95% 00:25:56.592 cpu : usr=95.14%, sys=4.37%, ctx=30, majf=0, minf=9 00:25:56.592 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.592 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.592 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:56.592 filename1: (groupid=0, jobs=1): err= 0: pid=102421: Tue Nov 26 04:21:57 2024 00:25:56.592 read: IOPS=383, BW=1535KiB/s (1572kB/s)(15.0MiB/10005msec) 00:25:56.592 slat (nsec): min=5881, max=33152, avg=7576.81, stdev=2765.16 00:25:56.592 clat (usec): min=357, max=41426, avg=10398.48, stdev=17471.38 00:25:56.592 lat (usec): min=363, max=41435, avg=10406.05, stdev=17471.47 00:25:56.592 clat percentiles (usec): 00:25:56.592 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 379], 00:25:56.592 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 408], 00:25:56.592 | 70.00th=[ 433], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:25:56.592 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:25:56.592 | 99.99th=[41681] 00:25:56.592 bw ( KiB/s): min= 832, max= 2304, per=65.88%, avg=1510.74, stdev=398.05, samples=19 00:25:56.592 iops : min= 208, max= 576, avg=377.68, stdev=99.51, samples=19 00:25:56.592 lat (usec) : 500=74.09%, 750=0.94%, 1000=0.18% 00:25:56.593 lat (msec) : 2=0.10%, 50=24.69% 00:25:56.593 cpu : usr=95.42%, sys=4.12%, ctx=14, majf=0, minf=0 00:25:56.593 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.593 issued rwts: total=3840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.593 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:56.593 00:25:56.593 Run status group 0 (all jobs): 00:25:56.593 READ: bw=2292KiB/s (2347kB/s), 759KiB/s-1535KiB/s (777kB/s-1572kB/s), io=22.4MiB (23.5MB), run=10005-10017msec 00:25:56.593 04:21:57 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:56.593 04:21:57 -- target/dif.sh@43 -- # local sub 00:25:56.593 04:21:57 -- target/dif.sh@45 -- # for sub in 
"$@" 00:25:56.593 04:21:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:56.593 04:21:57 -- target/dif.sh@36 -- # local sub_id=0 00:25:56.593 04:21:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@45 -- # for sub in "$@" 00:25:56.593 04:21:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:56.593 04:21:57 -- target/dif.sh@36 -- # local sub_id=1 00:25:56.593 04:21:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 ************************************ 00:25:56.593 END TEST fio_dif_1_multi_subsystems 00:25:56.593 ************************************ 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 00:25:56.593 real 0m11.186s 00:25:56.593 user 0m19.893s 00:25:56.593 sys 0m1.129s 00:25:56.593 04:21:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 04:21:57 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:56.593 04:21:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:56.593 04:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 ************************************ 00:25:56.593 START TEST fio_dif_rand_params 00:25:56.593 ************************************ 00:25:56.593 04:21:57 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:25:56.593 04:21:57 -- target/dif.sh@100 -- # local NULL_DIF 00:25:56.593 04:21:57 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:56.593 04:21:57 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:56.593 04:21:57 -- target/dif.sh@103 -- # bs=128k 00:25:56.593 04:21:57 -- target/dif.sh@103 -- # numjobs=3 00:25:56.593 04:21:57 -- target/dif.sh@103 -- # iodepth=3 00:25:56.593 04:21:57 -- target/dif.sh@103 -- # runtime=5 00:25:56.593 04:21:57 -- target/dif.sh@105 -- # create_subsystems 0 00:25:56.593 04:21:57 -- target/dif.sh@28 -- # local sub 00:25:56.593 04:21:57 -- target/dif.sh@30 -- # for sub in "$@" 00:25:56.593 04:21:57 -- target/dif.sh@31 -- # create_subsystem 0 00:25:56.593 04:21:57 -- target/dif.sh@18 -- # local sub_id=0 00:25:56.593 04:21:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 bdev_null0 00:25:56.593 04:21:57 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.593 04:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.593 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.593 [2024-11-26 04:21:57.740273] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.593 04:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.593 04:21:57 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:56.593 04:21:57 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:56.593 04:21:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:56.593 04:21:57 -- nvmf/common.sh@520 -- # config=() 00:25:56.593 04:21:57 -- nvmf/common.sh@520 -- # local subsystem config 00:25:56.593 04:21:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:56.593 04:21:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:56.593 04:21:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:56.593 { 00:25:56.593 "params": { 00:25:56.593 "name": "Nvme$subsystem", 00:25:56.593 "trtype": "$TEST_TRANSPORT", 00:25:56.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.593 "adrfam": "ipv4", 00:25:56.593 "trsvcid": "$NVMF_PORT", 00:25:56.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.593 "hdgst": ${hdgst:-false}, 00:25:56.593 "ddgst": ${ddgst:-false} 00:25:56.593 }, 00:25:56.593 "method": "bdev_nvme_attach_controller" 00:25:56.593 } 00:25:56.593 EOF 00:25:56.593 )") 00:25:56.593 04:21:57 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:56.593 04:21:57 -- target/dif.sh@82 -- # gen_fio_conf 00:25:56.593 04:21:57 -- target/dif.sh@54 -- # local file 00:25:56.593 04:21:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:56.593 04:21:57 -- target/dif.sh@56 -- # cat 00:25:56.593 04:21:57 -- nvmf/common.sh@542 -- # cat 00:25:56.593 04:21:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:56.593 04:21:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:56.593 04:21:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.593 04:21:57 -- common/autotest_common.sh@1330 -- # shift 00:25:56.593 04:21:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:56.593 04:21:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:56.593 04:21:57 -- nvmf/common.sh@544 -- # jq . 
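The config+=("$(cat <<-EOF ... )") lines and the jq / IFS=, / printf trio visible above are how gen_nvmf_target_json assembles the JSON handed to fio: one heredoc fragment per subsystem id is pushed onto a bash array, the fragments are joined with commas, and jq validates and pretty-prints the wrapped result. A distilled sketch of the pattern (the real helper emits a fuller bdev config around these entries):
config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
              "hostnqn": "nqn.2016-06.io.spdk:host$sub",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
  )")
done
# join the fragments with commas and let jq check/pretty-print the final config
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
EOF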
00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:56.593 04:21:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:56.593 04:21:57 -- nvmf/common.sh@545 -- # IFS=, 00:25:56.593 04:21:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:56.593 "params": { 00:25:56.593 "name": "Nvme0", 00:25:56.593 "trtype": "tcp", 00:25:56.593 "traddr": "10.0.0.2", 00:25:56.593 "adrfam": "ipv4", 00:25:56.593 "trsvcid": "4420", 00:25:56.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:56.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:56.593 "hdgst": false, 00:25:56.593 "ddgst": false 00:25:56.593 }, 00:25:56.593 "method": "bdev_nvme_attach_controller" 00:25:56.593 }' 00:25:56.593 04:21:57 -- target/dif.sh@72 -- # (( file <= files )) 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:56.593 04:21:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:56.593 04:21:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:56.593 04:21:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:56.593 04:21:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:56.593 04:21:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:56.593 04:21:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:56.593 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:56.593 ... 00:25:56.593 fio-3.35 00:25:56.593 Starting 3 threads 00:25:56.852 [2024-11-26 04:21:58.381209] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
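This rand_params pass reuses the same harness with a different job shape: the parameters chosen above are NULL_DIF=3 with bs=128k, numjobs=3, iodepth=3 and runtime=5, and the fio banner confirms a 128 KiB random-read workload across 3 threads. Relative to the earlier sketched job, each section therefore reduces to roughly the following (filename again assumed):
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1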
00:25:56.852 [2024-11-26 04:21:58.381279] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:02.164 00:26:02.164 filename0: (groupid=0, jobs=1): err= 0: pid=102577: Tue Nov 26 04:22:03 2024 00:26:02.164 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(166MiB/5003msec) 00:26:02.164 slat (nsec): min=5909, max=53605, avg=12607.60, stdev=5838.87 00:26:02.164 clat (usec): min=3138, max=52064, avg=11273.19, stdev=10030.54 00:26:02.164 lat (usec): min=3148, max=52071, avg=11285.80, stdev=10030.25 00:26:02.164 clat percentiles (usec): 00:26:02.164 | 1.00th=[ 3523], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6521], 00:26:02.164 | 30.00th=[ 8160], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:26:02.164 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[47449], 00:26:02.164 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:26:02.164 | 99.99th=[52167] 00:26:02.164 bw ( KiB/s): min=24320, max=39936, per=30.03%, avg=33621.33, stdev=5611.60, samples=9 00:26:02.164 iops : min= 190, max= 312, avg=262.67, stdev=43.84, samples=9 00:26:02.164 lat (msec) : 4=3.46%, 10=60.35%, 20=29.87%, 50=4.21%, 100=2.11% 00:26:02.164 cpu : usr=93.98%, sys=4.46%, ctx=4, majf=0, minf=0 00:26:02.164 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.164 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.164 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:02.164 filename0: (groupid=0, jobs=1): err= 0: pid=102578: Tue Nov 26 04:22:03 2024 00:26:02.164 read: IOPS=331, BW=41.4MiB/s (43.4MB/s)(207MiB/5002msec) 00:26:02.164 slat (nsec): min=5958, max=70238, avg=12568.18, stdev=7027.54 00:26:02.164 clat (usec): min=2964, max=50298, avg=9024.21, stdev=5146.37 00:26:02.164 lat (usec): min=2973, max=50304, avg=9036.78, stdev=5147.16 00:26:02.164 clat percentiles (usec): 00:26:02.164 | 1.00th=[ 3294], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 4948], 00:26:02.164 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 8586], 60.00th=[10945], 00:26:02.164 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12518], 95.00th=[12780], 00:26:02.164 | 99.00th=[43779], 99.50th=[46924], 99.90th=[48497], 99.95th=[50070], 00:26:02.164 | 99.99th=[50070] 00:26:02.164 bw ( KiB/s): min=33090, max=48384, per=38.24%, avg=42816.22, stdev=4734.19, samples=9 00:26:02.164 iops : min= 258, max= 378, avg=334.44, stdev=37.12, samples=9 00:26:02.164 lat (msec) : 4=18.03%, 10=35.52%, 20=45.36%, 50=1.03%, 100=0.06% 00:26:02.164 cpu : usr=94.38%, sys=4.10%, ctx=4, majf=0, minf=9 00:26:02.164 IO depths : 1=21.4%, 2=78.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.164 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.164 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:02.164 filename0: (groupid=0, jobs=1): err= 0: pid=102579: Tue Nov 26 04:22:03 2024 00:26:02.164 read: IOPS=281, BW=35.1MiB/s (36.9MB/s)(177MiB/5032msec) 00:26:02.164 slat (nsec): min=5945, max=50789, avg=11852.16, stdev=5632.45 00:26:02.164 clat (usec): min=2892, max=51414, avg=10650.39, stdev=10234.96 00:26:02.164 lat (usec): min=2901, max=51425, avg=10662.24, stdev=10234.95 00:26:02.164 clat 
percentiles (usec): 00:26:02.164 | 1.00th=[ 3523], 5.00th=[ 5276], 10.00th=[ 6063], 20.00th=[ 6587], 00:26:02.164 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:26:02.164 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[46924], 00:26:02.164 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:26:02.164 | 99.99th=[51643] 00:26:02.164 bw ( KiB/s): min=28416, max=40704, per=32.28%, avg=36147.20, stdev=4202.70, samples=10 00:26:02.164 iops : min= 222, max= 318, avg=282.40, stdev=32.83, samples=10 00:26:02.164 lat (msec) : 4=3.82%, 10=88.20%, 20=1.20%, 50=6.15%, 100=0.64% 00:26:02.164 cpu : usr=93.22%, sys=5.01%, ctx=11, majf=0, minf=0 00:26:02.164 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.164 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.164 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:02.164 00:26:02.164 Run status group 0 (all jobs): 00:26:02.164 READ: bw=109MiB/s (115MB/s), 33.2MiB/s-41.4MiB/s (34.8MB/s-43.4MB/s), io=550MiB (577MB), run=5002-5032msec 00:26:02.164 04:22:03 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:02.164 04:22:03 -- target/dif.sh@43 -- # local sub 00:26:02.164 04:22:03 -- target/dif.sh@45 -- # for sub in "$@" 00:26:02.164 04:22:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:02.164 04:22:03 -- target/dif.sh@36 -- # local sub_id=0 00:26:02.164 04:22:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:02.164 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.164 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.164 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.164 04:22:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:02.164 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.164 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.164 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:02.165 04:22:03 -- target/dif.sh@109 -- # bs=4k 00:26:02.165 04:22:03 -- target/dif.sh@109 -- # numjobs=8 00:26:02.165 04:22:03 -- target/dif.sh@109 -- # iodepth=16 00:26:02.165 04:22:03 -- target/dif.sh@109 -- # runtime= 00:26:02.165 04:22:03 -- target/dif.sh@109 -- # files=2 00:26:02.165 04:22:03 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:02.165 04:22:03 -- target/dif.sh@28 -- # local sub 00:26:02.165 04:22:03 -- target/dif.sh@30 -- # for sub in "$@" 00:26:02.165 04:22:03 -- target/dif.sh@31 -- # create_subsystem 0 00:26:02.165 04:22:03 -- target/dif.sh@18 -- # local sub_id=0 00:26:02.165 04:22:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 bdev_null0 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 [2024-11-26 04:22:03.792194] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@30 -- # for sub in "$@" 00:26:02.165 04:22:03 -- target/dif.sh@31 -- # create_subsystem 1 00:26:02.165 04:22:03 -- target/dif.sh@18 -- # local sub_id=1 00:26:02.165 04:22:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 bdev_null1 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@30 -- # for sub in "$@" 00:26:02.165 04:22:03 -- target/dif.sh@31 -- # create_subsystem 2 00:26:02.165 04:22:03 -- target/dif.sh@18 -- # local sub_id=2 00:26:02.165 04:22:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 bdev_null2 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 
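This second rand_params pass switches to DIF type 2 and three subsystems (create_subsystems 0 1 2 above). Stripped of the harness, the per-subsystem recipe being repeated here is essentially the loop below (scripts/rpc.py path assumed, flags copied from the log):
for i in 0 1 2; do
  ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      --serial-number "53313233-$i" --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done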
00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:02.165 04:22:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.165 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:26:02.165 04:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.165 04:22:03 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:02.165 04:22:03 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:02.165 04:22:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:02.165 04:22:03 -- nvmf/common.sh@520 -- # config=() 00:26:02.165 04:22:03 -- nvmf/common.sh@520 -- # local subsystem config 00:26:02.165 04:22:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:02.165 04:22:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:02.165 { 00:26:02.165 "params": { 00:26:02.165 "name": "Nvme$subsystem", 00:26:02.165 "trtype": "$TEST_TRANSPORT", 00:26:02.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.165 "adrfam": "ipv4", 00:26:02.165 "trsvcid": "$NVMF_PORT", 00:26:02.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.165 "hdgst": ${hdgst:-false}, 00:26:02.165 "ddgst": ${ddgst:-false} 00:26:02.165 }, 00:26:02.165 "method": "bdev_nvme_attach_controller" 00:26:02.165 } 00:26:02.165 EOF 00:26:02.165 )") 00:26:02.165 04:22:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.165 04:22:03 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.165 04:22:03 -- target/dif.sh@82 -- # gen_fio_conf 00:26:02.165 04:22:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:02.165 04:22:03 -- target/dif.sh@54 -- # local file 00:26:02.165 04:22:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:02.165 04:22:03 -- target/dif.sh@56 -- # cat 00:26:02.165 04:22:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:02.165 04:22:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:02.165 04:22:03 -- common/autotest_common.sh@1330 -- # shift 00:26:02.165 04:22:03 -- nvmf/common.sh@542 -- # cat 00:26:02.165 04:22:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:02.165 04:22:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.165 04:22:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:02.165 04:22:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:02.165 04:22:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:02.165 04:22:03 -- target/dif.sh@72 -- # (( file <= files )) 00:26:02.165 04:22:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:02.165 04:22:03 -- target/dif.sh@73 -- # cat 00:26:02.165 04:22:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:02.165 04:22:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:02.165 { 00:26:02.165 "params": { 00:26:02.165 "name": "Nvme$subsystem", 00:26:02.165 "trtype": "$TEST_TRANSPORT", 00:26:02.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.165 "adrfam": "ipv4", 00:26:02.165 "trsvcid": "$NVMF_PORT", 00:26:02.165 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.165 "hdgst": ${hdgst:-false}, 00:26:02.165 "ddgst": ${ddgst:-false} 00:26:02.165 }, 00:26:02.165 "method": "bdev_nvme_attach_controller" 00:26:02.165 } 00:26:02.165 EOF 00:26:02.165 )") 00:26:02.165 04:22:03 -- nvmf/common.sh@542 -- # cat 00:26:02.165 04:22:03 -- target/dif.sh@72 -- # (( file++ )) 00:26:02.165 04:22:03 -- target/dif.sh@72 -- # (( file <= files )) 00:26:02.165 04:22:03 -- target/dif.sh@73 -- # cat 00:26:02.165 04:22:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:02.166 04:22:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:02.166 { 00:26:02.166 "params": { 00:26:02.166 "name": "Nvme$subsystem", 00:26:02.166 "trtype": "$TEST_TRANSPORT", 00:26:02.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.166 "adrfam": "ipv4", 00:26:02.166 "trsvcid": "$NVMF_PORT", 00:26:02.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.166 "hdgst": ${hdgst:-false}, 00:26:02.166 "ddgst": ${ddgst:-false} 00:26:02.166 }, 00:26:02.166 "method": "bdev_nvme_attach_controller" 00:26:02.166 } 00:26:02.166 EOF 00:26:02.166 )") 00:26:02.166 04:22:03 -- nvmf/common.sh@542 -- # cat 00:26:02.166 04:22:03 -- target/dif.sh@72 -- # (( file++ )) 00:26:02.166 04:22:03 -- target/dif.sh@72 -- # (( file <= files )) 00:26:02.166 04:22:03 -- nvmf/common.sh@544 -- # jq . 00:26:02.166 04:22:03 -- nvmf/common.sh@545 -- # IFS=, 00:26:02.166 04:22:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:02.166 "params": { 00:26:02.166 "name": "Nvme0", 00:26:02.166 "trtype": "tcp", 00:26:02.166 "traddr": "10.0.0.2", 00:26:02.166 "adrfam": "ipv4", 00:26:02.166 "trsvcid": "4420", 00:26:02.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.166 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:02.166 "hdgst": false, 00:26:02.166 "ddgst": false 00:26:02.166 }, 00:26:02.166 "method": "bdev_nvme_attach_controller" 00:26:02.166 },{ 00:26:02.166 "params": { 00:26:02.166 "name": "Nvme1", 00:26:02.166 "trtype": "tcp", 00:26:02.166 "traddr": "10.0.0.2", 00:26:02.166 "adrfam": "ipv4", 00:26:02.166 "trsvcid": "4420", 00:26:02.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.166 "hdgst": false, 00:26:02.166 "ddgst": false 00:26:02.166 }, 00:26:02.166 "method": "bdev_nvme_attach_controller" 00:26:02.166 },{ 00:26:02.166 "params": { 00:26:02.166 "name": "Nvme2", 00:26:02.166 "trtype": "tcp", 00:26:02.166 "traddr": "10.0.0.2", 00:26:02.166 "adrfam": "ipv4", 00:26:02.166 "trsvcid": "4420", 00:26:02.166 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:02.166 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:02.166 "hdgst": false, 00:26:02.166 "ddgst": false 00:26:02.166 }, 00:26:02.166 "method": "bdev_nvme_attach_controller" 00:26:02.166 }' 00:26:02.166 04:22:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:02.166 04:22:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:02.166 04:22:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.166 04:22:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:02.166 04:22:03 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:02.166 04:22:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:02.438 04:22:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:02.438 04:22:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:02.438 
04:22:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:02.438 04:22:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.438 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:02.438 ... 00:26:02.438 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:02.438 ... 00:26:02.438 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:02.438 ... 00:26:02.438 fio-3.35 00:26:02.438 Starting 24 threads 00:26:03.005 [2024-11-26 04:22:04.720472] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:03.005 [2024-11-26 04:22:04.720534] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:15.214 00:26:15.214 filename0: (groupid=0, jobs=1): err= 0: pid=102674: Tue Nov 26 04:22:14 2024 00:26:15.214 read: IOPS=264, BW=1057KiB/s (1082kB/s)(10.3MiB/10009msec) 00:26:15.214 slat (usec): min=3, max=8064, avg=16.96, stdev=175.16 00:26:15.214 clat (msec): min=9, max=127, avg=60.43, stdev=19.13 00:26:15.214 lat (msec): min=9, max=127, avg=60.45, stdev=19.13 00:26:15.214 clat percentiles (msec): 00:26:15.214 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:26:15.214 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:26:15.214 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 95], 00:26:15.214 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 122], 00:26:15.214 | 99.99th=[ 128] 00:26:15.214 bw ( KiB/s): min= 640, max= 1536, per=4.06%, avg=1053.05, stdev=216.61, samples=20 00:26:15.214 iops : min= 160, max= 384, avg=263.25, stdev=54.15, samples=20 00:26:15.214 lat (msec) : 10=0.23%, 50=34.06%, 100=62.57%, 250=3.14% 00:26:15.214 cpu : usr=41.36%, sys=0.50%, ctx=969, majf=0, minf=9 00:26:15.214 IO depths : 1=1.5%, 2=3.6%, 4=10.7%, 8=72.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:15.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 issued rwts: total=2645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.214 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.214 filename0: (groupid=0, jobs=1): err= 0: pid=102675: Tue Nov 26 04:22:14 2024 00:26:15.214 read: IOPS=249, BW=996KiB/s (1020kB/s)(9968KiB/10006msec) 00:26:15.214 slat (usec): min=6, max=8028, avg=21.76, stdev=216.84 00:26:15.214 clat (msec): min=16, max=141, avg=64.04, stdev=17.05 00:26:15.214 lat (msec): min=16, max=141, avg=64.06, stdev=17.05 00:26:15.214 clat percentiles (msec): 00:26:15.214 | 1.00th=[ 25], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 53], 00:26:15.214 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 64], 00:26:15.214 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 95], 00:26:15.214 | 99.00th=[ 116], 99.50th=[ 125], 99.90th=[ 142], 99.95th=[ 142], 00:26:15.214 | 99.99th=[ 142] 00:26:15.214 bw ( KiB/s): min= 696, max= 1408, per=3.82%, avg=990.25, stdev=151.03, samples=20 00:26:15.214 iops : min= 174, max= 352, avg=247.55, stdev=37.74, samples=20 00:26:15.214 lat (msec) : 20=0.64%, 50=15.45%, 100=80.70%, 250=3.21% 00:26:15.214 cpu : usr=41.90%, sys=0.57%, ctx=1320, majf=0, minf=9 00:26:15.214 IO depths : 1=2.6%, 2=5.9%, 
4=16.3%, 8=64.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:15.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.214 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.214 filename0: (groupid=0, jobs=1): err= 0: pid=102676: Tue Nov 26 04:22:14 2024 00:26:15.214 read: IOPS=282, BW=1130KiB/s (1157kB/s)(11.0MiB/10014msec) 00:26:15.214 slat (usec): min=6, max=8031, avg=22.93, stdev=270.57 00:26:15.214 clat (msec): min=26, max=122, avg=56.47, stdev=15.64 00:26:15.214 lat (msec): min=26, max=122, avg=56.49, stdev=15.64 00:26:15.214 clat percentiles (msec): 00:26:15.214 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 44], 00:26:15.214 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 58], 00:26:15.214 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 77], 95.00th=[ 85], 00:26:15.214 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 124], 99.95th=[ 124], 00:26:15.214 | 99.99th=[ 124] 00:26:15.214 bw ( KiB/s): min= 944, max= 1408, per=4.33%, avg=1124.70, stdev=133.11, samples=20 00:26:15.214 iops : min= 236, max= 352, avg=281.15, stdev=33.23, samples=20 00:26:15.214 lat (msec) : 50=37.20%, 100=61.14%, 250=1.66% 00:26:15.214 cpu : usr=40.40%, sys=0.62%, ctx=1228, majf=0, minf=9 00:26:15.214 IO depths : 1=1.4%, 2=3.5%, 4=12.2%, 8=70.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:15.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 complete : 0=0.0%, 4=90.7%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.214 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.214 filename0: (groupid=0, jobs=1): err= 0: pid=102677: Tue Nov 26 04:22:14 2024 00:26:15.214 read: IOPS=286, BW=1148KiB/s (1175kB/s)(11.3MiB/10040msec) 00:26:15.214 slat (usec): min=4, max=8020, avg=19.43, stdev=222.04 00:26:15.214 clat (msec): min=8, max=145, avg=55.58, stdev=18.26 00:26:15.214 lat (msec): min=8, max=145, avg=55.60, stdev=18.27 00:26:15.214 clat percentiles (msec): 00:26:15.214 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:26:15.214 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 60], 00:26:15.214 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 85], 00:26:15.214 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 146], 99.95th=[ 146], 00:26:15.214 | 99.99th=[ 146] 00:26:15.214 bw ( KiB/s): min= 640, max= 1408, per=4.42%, avg=1146.10, stdev=204.56, samples=20 00:26:15.214 iops : min= 160, max= 352, avg=286.50, stdev=51.12, samples=20 00:26:15.214 lat (msec) : 10=0.49%, 20=0.38%, 50=40.19%, 100=56.40%, 250=2.53% 00:26:15.214 cpu : usr=37.70%, sys=0.48%, ctx=1121, majf=0, minf=9 00:26:15.214 IO depths : 1=1.1%, 2=2.4%, 4=9.9%, 8=74.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:15.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 issued rwts: total=2881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.214 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.214 filename0: (groupid=0, jobs=1): err= 0: pid=102678: Tue Nov 26 04:22:14 2024 00:26:15.214 read: IOPS=317, BW=1271KiB/s (1302kB/s)(12.5MiB/10045msec) 00:26:15.214 slat (usec): min=3, max=8024, avg=16.78, stdev=185.51 00:26:15.214 clat (usec): min=1466, max=131229, avg=50208.69, stdev=19778.38 
00:26:15.214 lat (usec): min=1473, max=131261, avg=50225.47, stdev=19786.53 00:26:15.214 clat percentiles (msec): 00:26:15.214 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 36], 00:26:15.214 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 55], 00:26:15.214 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 85], 00:26:15.214 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:26:15.214 | 99.99th=[ 132] 00:26:15.214 bw ( KiB/s): min= 768, max= 2349, per=4.89%, avg=1269.45, stdev=327.93, samples=20 00:26:15.214 iops : min= 192, max= 587, avg=317.35, stdev=81.94, samples=20 00:26:15.214 lat (msec) : 2=0.28%, 4=1.44%, 10=2.01%, 50=51.47%, 100=43.01% 00:26:15.214 lat (msec) : 250=1.79% 00:26:15.214 cpu : usr=40.24%, sys=0.43%, ctx=1136, majf=0, minf=9 00:26:15.214 IO depths : 1=0.6%, 2=1.2%, 4=7.0%, 8=78.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:15.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.214 issued rwts: total=3192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.214 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.214 filename0: (groupid=0, jobs=1): err= 0: pid=102679: Tue Nov 26 04:22:14 2024 00:26:15.214 read: IOPS=262, BW=1051KiB/s (1077kB/s)(10.3MiB/10020msec) 00:26:15.214 slat (usec): min=6, max=8037, avg=19.08, stdev=221.06 00:26:15.215 clat (msec): min=14, max=122, avg=60.71, stdev=18.18 00:26:15.215 lat (msec): min=14, max=122, avg=60.73, stdev=18.19 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:26:15.215 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 62], 00:26:15.215 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 94], 00:26:15.215 | 99.00th=[ 106], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:26:15.215 | 99.99th=[ 124] 00:26:15.215 bw ( KiB/s): min= 808, max= 1712, per=4.04%, avg=1047.20, stdev=191.65, samples=20 00:26:15.215 iops : min= 202, max= 428, avg=261.80, stdev=47.91, samples=20 00:26:15.215 lat (msec) : 20=2.01%, 50=26.61%, 100=69.29%, 250=2.09% 00:26:15.215 cpu : usr=34.46%, sys=0.52%, ctx=1048, majf=0, minf=9 00:26:15.215 IO depths : 1=1.8%, 2=3.9%, 4=12.5%, 8=70.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=2634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename0: (groupid=0, jobs=1): err= 0: pid=102680: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=252, BW=1009KiB/s (1033kB/s)(9.87MiB/10018msec) 00:26:15.215 slat (usec): min=3, max=8028, avg=25.06, stdev=318.12 00:26:15.215 clat (msec): min=20, max=155, avg=63.30, stdev=20.20 00:26:15.215 lat (msec): min=20, max=155, avg=63.33, stdev=20.19 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 48], 00:26:15.215 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 67], 00:26:15.215 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:26:15.215 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:15.215 | 99.99th=[ 157] 00:26:15.215 bw ( KiB/s): min= 696, max= 1472, per=3.87%, avg=1004.10, stdev=166.71, samples=20 00:26:15.215 iops : min= 174, max= 368, avg=251.00, stdev=41.66, samples=20 
00:26:15.215 lat (msec) : 50=27.55%, 100=68.88%, 250=3.56% 00:26:15.215 cpu : usr=32.80%, sys=0.40%, ctx=842, majf=0, minf=9 00:26:15.215 IO depths : 1=1.3%, 2=3.0%, 4=11.8%, 8=71.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename0: (groupid=0, jobs=1): err= 0: pid=102681: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=246, BW=985KiB/s (1009kB/s)(9852KiB/10002msec) 00:26:15.215 slat (usec): min=3, max=8023, avg=23.51, stdev=279.46 00:26:15.215 clat (msec): min=5, max=139, avg=64.84, stdev=18.29 00:26:15.215 lat (msec): min=5, max=139, avg=64.86, stdev=18.30 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:26:15.215 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 70], 00:26:15.215 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 96], 00:26:15.215 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 140], 99.95th=[ 140], 00:26:15.215 | 99.99th=[ 140] 00:26:15.215 bw ( KiB/s): min= 696, max= 1104, per=3.69%, avg=957.53, stdev=108.91, samples=19 00:26:15.215 iops : min= 174, max= 276, avg=239.37, stdev=27.24, samples=19 00:26:15.215 lat (msec) : 10=0.65%, 50=20.26%, 100=75.72%, 250=3.37% 00:26:15.215 cpu : usr=32.82%, sys=0.40%, ctx=845, majf=0, minf=9 00:26:15.215 IO depths : 1=1.6%, 2=3.8%, 4=12.3%, 8=70.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=2463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename1: (groupid=0, jobs=1): err= 0: pid=102682: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=253, BW=1014KiB/s (1038kB/s)(9.90MiB/10004msec) 00:26:15.215 slat (usec): min=4, max=8193, avg=24.49, stdev=284.37 00:26:15.215 clat (msec): min=18, max=123, avg=62.99, stdev=17.67 00:26:15.215 lat (msec): min=18, max=124, avg=63.02, stdev=17.68 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 25], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 49], 00:26:15.215 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:26:15.215 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 86], 95.00th=[ 95], 00:26:15.215 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 125], 00:26:15.215 | 99.99th=[ 125] 00:26:15.215 bw ( KiB/s): min= 640, max= 1232, per=3.83%, avg=993.16, stdev=164.31, samples=19 00:26:15.215 iops : min= 160, max= 308, avg=248.26, stdev=41.10, samples=19 00:26:15.215 lat (msec) : 20=0.20%, 50=21.18%, 100=74.75%, 250=3.87% 00:26:15.215 cpu : usr=38.51%, sys=0.52%, ctx=1235, majf=0, minf=9 00:26:15.215 IO depths : 1=2.1%, 2=5.0%, 4=15.0%, 8=66.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=2535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename1: (groupid=0, jobs=1): err= 0: pid=102683: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=313, BW=1254KiB/s 
(1284kB/s)(12.3MiB/10044msec) 00:26:15.215 slat (usec): min=3, max=8044, avg=19.10, stdev=212.50 00:26:15.215 clat (usec): min=1387, max=119954, avg=50894.29, stdev=19068.68 00:26:15.215 lat (usec): min=1396, max=119973, avg=50913.38, stdev=19071.56 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 38], 00:26:15.215 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 55], 00:26:15.215 | 70.00th=[ 58], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 88], 00:26:15.215 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:26:15.215 | 99.99th=[ 121] 00:26:15.215 bw ( KiB/s): min= 736, max= 1664, per=4.83%, avg=1252.50, stdev=236.42, samples=20 00:26:15.215 iops : min= 184, max= 416, avg=313.10, stdev=59.06, samples=20 00:26:15.215 lat (msec) : 2=0.51%, 10=1.52%, 20=1.56%, 50=50.65%, 100=43.57% 00:26:15.215 lat (msec) : 250=2.19% 00:26:15.215 cpu : usr=42.36%, sys=0.67%, ctx=1352, majf=0, minf=9 00:26:15.215 IO depths : 1=0.3%, 2=1.0%, 4=6.4%, 8=78.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=3149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename1: (groupid=0, jobs=1): err= 0: pid=102684: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=263, BW=1052KiB/s (1078kB/s)(10.3MiB/10024msec) 00:26:15.215 slat (usec): min=3, max=8030, avg=27.60, stdev=340.58 00:26:15.215 clat (msec): min=22, max=142, avg=60.62, stdev=20.43 00:26:15.215 lat (msec): min=22, max=142, avg=60.65, stdev=20.43 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 00:26:15.215 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 61], 00:26:15.215 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 97], 00:26:15.215 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:26:15.215 | 99.99th=[ 144] 00:26:15.215 bw ( KiB/s): min= 688, max= 1416, per=4.04%, avg=1048.50, stdev=170.65, samples=20 00:26:15.215 iops : min= 172, max= 354, avg=262.10, stdev=42.63, samples=20 00:26:15.215 lat (msec) : 50=33.26%, 100=62.38%, 250=4.36% 00:26:15.215 cpu : usr=33.45%, sys=0.27%, ctx=873, majf=0, minf=9 00:26:15.215 IO depths : 1=1.0%, 2=2.4%, 4=10.1%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename1: (groupid=0, jobs=1): err= 0: pid=102685: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=284, BW=1140KiB/s (1167kB/s)(11.1MiB/10017msec) 00:26:15.215 slat (usec): min=4, max=8022, avg=18.22, stdev=192.20 00:26:15.215 clat (msec): min=22, max=127, avg=56.03, stdev=18.41 00:26:15.215 lat (msec): min=22, max=127, avg=56.05, stdev=18.41 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:26:15.215 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 58], 00:26:15.215 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 93], 00:26:15.215 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 128], 99.95th=[ 128], 00:26:15.215 | 99.99th=[ 128] 00:26:15.215 bw ( 
KiB/s): min= 688, max= 1616, per=4.38%, avg=1137.60, stdev=231.17, samples=20 00:26:15.215 iops : min= 172, max= 404, avg=284.40, stdev=57.79, samples=20 00:26:15.215 lat (msec) : 50=41.87%, 100=55.68%, 250=2.45% 00:26:15.215 cpu : usr=43.24%, sys=0.46%, ctx=1285, majf=0, minf=9 00:26:15.215 IO depths : 1=1.5%, 2=3.2%, 4=10.0%, 8=73.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:15.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.215 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.215 filename1: (groupid=0, jobs=1): err= 0: pid=102686: Tue Nov 26 04:22:14 2024 00:26:15.215 read: IOPS=266, BW=1068KiB/s (1093kB/s)(10.4MiB/10010msec) 00:26:15.215 slat (usec): min=4, max=8031, avg=23.30, stdev=289.68 00:26:15.215 clat (msec): min=19, max=127, avg=59.79, stdev=19.72 00:26:15.215 lat (msec): min=19, max=127, avg=59.82, stdev=19.73 00:26:15.215 clat percentiles (msec): 00:26:15.215 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 41], 00:26:15.215 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 62], 00:26:15.215 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 99], 00:26:15.215 | 99.00th=[ 114], 99.50th=[ 115], 99.90th=[ 128], 99.95th=[ 128], 00:26:15.215 | 99.99th=[ 128] 00:26:15.215 bw ( KiB/s): min= 688, max= 1456, per=4.02%, avg=1043.74, stdev=226.18, samples=19 00:26:15.215 iops : min= 172, max= 364, avg=260.89, stdev=56.53, samples=19 00:26:15.216 lat (msec) : 20=0.15%, 50=33.65%, 100=61.68%, 250=4.53% 00:26:15.216 cpu : usr=42.31%, sys=0.55%, ctx=1152, majf=0, minf=9 00:26:15.216 IO depths : 1=1.4%, 2=3.3%, 4=11.0%, 8=71.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=90.6%, 8=5.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename1: (groupid=0, jobs=1): err= 0: pid=102687: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.85MiB/10022msec) 00:26:15.216 slat (usec): min=4, max=8057, avg=19.55, stdev=229.72 00:26:15.216 clat (msec): min=22, max=165, avg=63.44, stdev=19.71 00:26:15.216 lat (msec): min=22, max=165, avg=63.46, stdev=19.72 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 25], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:26:15.216 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:26:15.216 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 89], 95.00th=[ 96], 00:26:15.216 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 165], 99.95th=[ 165], 00:26:15.216 | 99.99th=[ 165] 00:26:15.216 bw ( KiB/s): min= 600, max= 1504, per=3.87%, avg=1003.80, stdev=184.86, samples=20 00:26:15.216 iops : min= 150, max= 376, avg=250.95, stdev=46.21, samples=20 00:26:15.216 lat (msec) : 50=26.72%, 100=68.95%, 250=4.32% 00:26:15.216 cpu : usr=35.37%, sys=0.45%, ctx=972, majf=0, minf=9 00:26:15.216 IO depths : 1=1.0%, 2=2.1%, 4=8.7%, 8=74.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=89.7%, 8=6.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:26:15.216 filename1: (groupid=0, jobs=1): err= 0: pid=102688: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=244, BW=978KiB/s (1002kB/s)(9788KiB/10005msec) 00:26:15.216 slat (usec): min=4, max=8031, avg=17.78, stdev=181.53 00:26:15.216 clat (msec): min=6, max=148, avg=65.29, stdev=18.49 00:26:15.216 lat (msec): min=6, max=148, avg=65.31, stdev=18.49 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 52], 00:26:15.216 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 64], 00:26:15.216 | 70.00th=[ 71], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 96], 00:26:15.216 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 150], 99.95th=[ 150], 00:26:15.216 | 99.99th=[ 150] 00:26:15.216 bw ( KiB/s): min= 640, max= 1152, per=3.66%, avg=949.47, stdev=133.07, samples=19 00:26:15.216 iops : min= 160, max= 288, avg=237.37, stdev=33.27, samples=19 00:26:15.216 lat (msec) : 10=0.37%, 20=0.29%, 50=16.67%, 100=78.14%, 250=4.54% 00:26:15.216 cpu : usr=37.78%, sys=0.57%, ctx=1022, majf=0, minf=9 00:26:15.216 IO depths : 1=2.4%, 2=5.5%, 4=15.8%, 8=65.6%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename1: (groupid=0, jobs=1): err= 0: pid=102689: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=241, BW=965KiB/s (988kB/s)(9652KiB/10004msec) 00:26:15.216 slat (usec): min=4, max=8031, avg=29.98, stdev=364.34 00:26:15.216 clat (msec): min=6, max=143, avg=66.11, stdev=19.83 00:26:15.216 lat (msec): min=6, max=143, avg=66.14, stdev=19.84 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 51], 00:26:15.216 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 68], 00:26:15.216 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 105], 00:26:15.216 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 144], 00:26:15.216 | 99.99th=[ 144] 00:26:15.216 bw ( KiB/s): min= 640, max= 1152, per=3.62%, avg=939.37, stdev=130.38, samples=19 00:26:15.216 iops : min= 160, max= 288, avg=234.84, stdev=32.59, samples=19 00:26:15.216 lat (msec) : 10=0.62%, 20=0.04%, 50=18.23%, 100=75.47%, 250=5.64% 00:26:15.216 cpu : usr=35.89%, sys=0.58%, ctx=1011, majf=0, minf=9 00:26:15.216 IO depths : 1=2.4%, 2=5.5%, 4=15.5%, 8=66.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename2: (groupid=0, jobs=1): err= 0: pid=102690: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=307, BW=1230KiB/s (1260kB/s)(12.0MiB/10015msec) 00:26:15.216 slat (usec): min=3, max=10062, avg=20.06, stdev=254.95 00:26:15.216 clat (msec): min=3, max=109, avg=51.90, stdev=17.47 00:26:15.216 lat (msec): min=3, max=109, avg=51.92, stdev=17.47 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 37], 00:26:15.216 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:26:15.216 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 72], 95.00th=[ 84], 00:26:15.216 | 99.00th=[ 96], 
99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:26:15.216 | 99.99th=[ 110] 00:26:15.216 bw ( KiB/s): min= 952, max= 1891, per=4.72%, avg=1225.75, stdev=241.34, samples=20 00:26:15.216 iops : min= 238, max= 472, avg=306.40, stdev=60.23, samples=20 00:26:15.216 lat (msec) : 4=0.52%, 10=1.49%, 20=1.23%, 50=45.94%, 100=50.06% 00:26:15.216 lat (msec) : 250=0.75% 00:26:15.216 cpu : usr=37.27%, sys=0.54%, ctx=1089, majf=0, minf=9 00:26:15.216 IO depths : 1=0.8%, 2=1.9%, 4=9.0%, 8=75.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=3080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename2: (groupid=0, jobs=1): err= 0: pid=102691: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=277, BW=1111KiB/s (1137kB/s)(10.9MiB/10025msec) 00:26:15.216 slat (nsec): min=4845, max=81674, avg=12145.16, stdev=7489.19 00:26:15.216 clat (msec): min=21, max=134, avg=57.52, stdev=18.89 00:26:15.216 lat (msec): min=21, max=134, avg=57.53, stdev=18.89 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:26:15.216 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 61], 00:26:15.216 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 92], 00:26:15.216 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 136], 99.95th=[ 136], 00:26:15.216 | 99.99th=[ 136] 00:26:15.216 bw ( KiB/s): min= 656, max= 1600, per=4.27%, avg=1107.20, stdev=232.64, samples=20 00:26:15.216 iops : min= 164, max= 400, avg=276.80, stdev=58.16, samples=20 00:26:15.216 lat (msec) : 50=39.01%, 100=58.30%, 250=2.69% 00:26:15.216 cpu : usr=38.11%, sys=0.51%, ctx=1309, majf=0, minf=9 00:26:15.216 IO depths : 1=0.3%, 2=0.7%, 4=6.5%, 8=78.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=89.1%, 8=7.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename2: (groupid=0, jobs=1): err= 0: pid=102692: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10019msec) 00:26:15.216 slat (usec): min=6, max=8020, avg=16.84, stdev=169.28 00:26:15.216 clat (msec): min=13, max=148, avg=60.63, stdev=21.13 00:26:15.216 lat (msec): min=13, max=148, avg=60.64, stdev=21.13 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 44], 00:26:15.216 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 62], 00:26:15.216 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 100], 00:26:15.216 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:26:15.216 | 99.99th=[ 148] 00:26:15.216 bw ( KiB/s): min= 640, max= 1840, per=4.05%, avg=1049.05, stdev=256.55, samples=20 00:26:15.216 iops : min= 160, max= 460, avg=262.20, stdev=64.10, samples=20 00:26:15.216 lat (msec) : 20=1.82%, 50=29.33%, 100=63.96%, 250=4.89% 00:26:15.216 cpu : usr=37.00%, sys=0.54%, ctx=1144, majf=0, minf=9 00:26:15.216 IO depths : 1=1.1%, 2=2.9%, 4=11.6%, 8=72.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename2: (groupid=0, jobs=1): err= 0: pid=102693: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=269, BW=1078KiB/s (1104kB/s)(10.5MiB/10010msec) 00:26:15.216 slat (usec): min=6, max=8037, avg=16.13, stdev=172.81 00:26:15.216 clat (msec): min=22, max=130, avg=59.27, stdev=17.66 00:26:15.216 lat (msec): min=22, max=130, avg=59.29, stdev=17.66 00:26:15.216 clat percentiles (msec): 00:26:15.216 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 46], 00:26:15.216 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:26:15.216 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 93], 00:26:15.216 | 99.00th=[ 107], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:26:15.216 | 99.99th=[ 131] 00:26:15.216 bw ( KiB/s): min= 768, max= 1336, per=4.09%, avg=1061.37, stdev=146.67, samples=19 00:26:15.216 iops : min= 192, max= 334, avg=265.32, stdev=36.63, samples=19 00:26:15.216 lat (msec) : 50=32.25%, 100=66.01%, 250=1.74% 00:26:15.216 cpu : usr=37.17%, sys=0.47%, ctx=1115, majf=0, minf=9 00:26:15.216 IO depths : 1=1.6%, 2=3.4%, 4=11.0%, 8=71.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:15.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 complete : 0=0.0%, 4=90.7%, 8=4.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.216 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.216 filename2: (groupid=0, jobs=1): err= 0: pid=102694: Tue Nov 26 04:22:14 2024 00:26:15.216 read: IOPS=269, BW=1076KiB/s (1102kB/s)(10.5MiB/10003msec) 00:26:15.216 slat (usec): min=4, max=8033, avg=27.47, stdev=345.18 00:26:15.217 clat (msec): min=4, max=119, avg=59.29, stdev=18.51 00:26:15.217 lat (msec): min=4, max=119, avg=59.32, stdev=18.52 00:26:15.217 clat percentiles (msec): 00:26:15.217 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 47], 00:26:15.217 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 61], 00:26:15.217 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:26:15.217 | 99.00th=[ 116], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:26:15.217 | 99.99th=[ 121] 00:26:15.217 bw ( KiB/s): min= 856, max= 1464, per=4.05%, avg=1050.95, stdev=170.34, samples=19 00:26:15.217 iops : min= 214, max= 366, avg=262.74, stdev=42.58, samples=19 00:26:15.217 lat (msec) : 10=0.59%, 20=0.59%, 50=35.15%, 100=60.72%, 250=2.94% 00:26:15.217 cpu : usr=32.80%, sys=0.42%, ctx=845, majf=0, minf=9 00:26:15.217 IO depths : 1=1.4%, 2=3.0%, 4=10.9%, 8=72.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:15.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 issued rwts: total=2691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.217 filename2: (groupid=0, jobs=1): err= 0: pid=102695: Tue Nov 26 04:22:14 2024 00:26:15.217 read: IOPS=280, BW=1120KiB/s (1147kB/s)(11.0MiB/10014msec) 00:26:15.217 slat (usec): min=6, max=8033, avg=22.55, stdev=293.80 00:26:15.217 clat (msec): min=21, max=148, avg=57.02, stdev=20.50 00:26:15.217 lat (msec): min=21, max=148, avg=57.04, stdev=20.50 00:26:15.217 clat percentiles (msec): 00:26:15.217 | 1.00th=[ 25], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 39], 00:26:15.217 | 30.00th=[ 45], 
40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:26:15.217 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 95], 00:26:15.217 | 99.00th=[ 120], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:26:15.217 | 99.99th=[ 148] 00:26:15.217 bw ( KiB/s): min= 640, max= 1472, per=4.30%, avg=1115.05, stdev=224.69, samples=20 00:26:15.217 iops : min= 160, max= 368, avg=278.70, stdev=56.17, samples=20 00:26:15.217 lat (msec) : 50=44.40%, 100=52.71%, 250=2.89% 00:26:15.217 cpu : usr=38.18%, sys=0.56%, ctx=1114, majf=0, minf=9 00:26:15.217 IO depths : 1=1.1%, 2=2.5%, 4=9.0%, 8=74.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:15.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 complete : 0=0.0%, 4=90.0%, 8=5.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.217 filename2: (groupid=0, jobs=1): err= 0: pid=102696: Tue Nov 26 04:22:14 2024 00:26:15.217 read: IOPS=259, BW=1037KiB/s (1062kB/s)(10.1MiB/10014msec) 00:26:15.217 slat (usec): min=4, max=8018, avg=22.77, stdev=222.87 00:26:15.217 clat (msec): min=15, max=126, avg=61.48, stdev=18.03 00:26:15.217 lat (msec): min=15, max=126, avg=61.50, stdev=18.03 00:26:15.217 clat percentiles (msec): 00:26:15.217 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:26:15.217 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 00:26:15.217 | 70.00th=[ 67], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 96], 00:26:15.217 | 99.00th=[ 113], 99.50th=[ 124], 99.90th=[ 127], 99.95th=[ 127], 00:26:15.217 | 99.99th=[ 127] 00:26:15.217 bw ( KiB/s): min= 640, max= 1664, per=3.98%, avg=1033.15, stdev=205.84, samples=20 00:26:15.217 iops : min= 160, max= 416, avg=258.25, stdev=51.46, samples=20 00:26:15.217 lat (msec) : 20=0.23%, 50=24.64%, 100=72.08%, 250=3.04% 00:26:15.217 cpu : usr=47.58%, sys=0.66%, ctx=1401, majf=0, minf=9 00:26:15.217 IO depths : 1=3.2%, 2=7.0%, 4=17.1%, 8=63.0%, 16=9.7%, 32=0.0%, >=64=0.0% 00:26:15.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 complete : 0=0.0%, 4=92.1%, 8=2.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.217 filename2: (groupid=0, jobs=1): err= 0: pid=102697: Tue Nov 26 04:22:14 2024 00:26:15.217 read: IOPS=292, BW=1171KiB/s (1199kB/s)(11.5MiB/10020msec) 00:26:15.217 slat (usec): min=3, max=4032, avg=15.56, stdev=114.08 00:26:15.217 clat (msec): min=22, max=164, avg=54.53, stdev=17.24 00:26:15.217 lat (msec): min=22, max=164, avg=54.54, stdev=17.23 00:26:15.217 clat percentiles (msec): 00:26:15.217 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:26:15.217 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 53], 60.00th=[ 58], 00:26:15.217 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 77], 95.00th=[ 83], 00:26:15.217 | 99.00th=[ 105], 99.50th=[ 118], 99.90th=[ 165], 99.95th=[ 165], 00:26:15.217 | 99.99th=[ 165] 00:26:15.217 bw ( KiB/s): min= 776, max= 1552, per=4.51%, avg=1169.20, stdev=191.69, samples=20 00:26:15.217 iops : min= 194, max= 388, avg=292.30, stdev=47.92, samples=20 00:26:15.217 lat (msec) : 50=45.55%, 100=52.57%, 250=1.88% 00:26:15.217 cpu : usr=40.63%, sys=0.45%, ctx=1232, majf=0, minf=9 00:26:15.217 IO depths : 1=0.9%, 2=2.0%, 4=8.8%, 8=75.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:15.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:15.217 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.217 issued rwts: total=2933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.217 00:26:15.217 Run status group 0 (all jobs): 00:26:15.217 READ: bw=25.3MiB/s (26.6MB/s), 965KiB/s-1271KiB/s (988kB/s-1302kB/s), io=254MiB (267MB), run=10002-10045msec 00:26:15.217 04:22:15 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:15.217 04:22:15 -- target/dif.sh@43 -- # local sub 00:26:15.217 04:22:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.217 04:22:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:15.217 04:22:15 -- target/dif.sh@36 -- # local sub_id=0 00:26:15.217 04:22:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.217 04:22:15 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:15.217 04:22:15 -- target/dif.sh@36 -- # local sub_id=1 00:26:15.217 04:22:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.217 04:22:15 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:15.217 04:22:15 -- target/dif.sh@36 -- # local sub_id=2 00:26:15.217 04:22:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:15.217 04:22:15 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:15.217 04:22:15 -- target/dif.sh@115 -- # numjobs=2 00:26:15.217 04:22:15 -- target/dif.sh@115 -- # iodepth=8 00:26:15.217 04:22:15 -- target/dif.sh@115 -- # runtime=5 00:26:15.217 04:22:15 -- target/dif.sh@115 -- # files=1 00:26:15.217 04:22:15 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:15.217 04:22:15 -- target/dif.sh@28 -- # local sub 00:26:15.217 04:22:15 -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.217 04:22:15 -- 
target/dif.sh@31 -- # create_subsystem 0 00:26:15.217 04:22:15 -- target/dif.sh@18 -- # local sub_id=0 00:26:15.217 04:22:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 bdev_null0 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 [2024-11-26 04:22:15.256160] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.217 04:22:15 -- target/dif.sh@31 -- # create_subsystem 1 00:26:15.217 04:22:15 -- target/dif.sh@18 -- # local sub_id=1 00:26:15.217 04:22:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 bdev_null1 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.217 04:22:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:15.217 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.217 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.217 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.218 04:22:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:15.218 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.218 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.218 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.218 04:22:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.218 04:22:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.218 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:15.218 04:22:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.218 04:22:15 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:15.218 04:22:15 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:15.218 04:22:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:15.218 04:22:15 -- nvmf/common.sh@520 -- # config=() 00:26:15.218 04:22:15 -- nvmf/common.sh@520 -- # local subsystem 
config 00:26:15.218 04:22:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.218 04:22:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.218 { 00:26:15.218 "params": { 00:26:15.218 "name": "Nvme$subsystem", 00:26:15.218 "trtype": "$TEST_TRANSPORT", 00:26:15.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.218 "adrfam": "ipv4", 00:26:15.218 "trsvcid": "$NVMF_PORT", 00:26:15.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.218 "hdgst": ${hdgst:-false}, 00:26:15.218 "ddgst": ${ddgst:-false} 00:26:15.218 }, 00:26:15.218 "method": "bdev_nvme_attach_controller" 00:26:15.218 } 00:26:15.218 EOF 00:26:15.218 )") 00:26:15.218 04:22:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.218 04:22:15 -- target/dif.sh@82 -- # gen_fio_conf 00:26:15.218 04:22:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.218 04:22:15 -- target/dif.sh@54 -- # local file 00:26:15.218 04:22:15 -- target/dif.sh@56 -- # cat 00:26:15.218 04:22:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:15.218 04:22:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:15.218 04:22:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:15.218 04:22:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.218 04:22:15 -- nvmf/common.sh@542 -- # cat 00:26:15.218 04:22:15 -- common/autotest_common.sh@1330 -- # shift 00:26:15.218 04:22:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:15.218 04:22:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.218 04:22:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.218 04:22:15 -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:15.218 04:22:15 -- target/dif.sh@73 -- # cat 00:26:15.218 04:22:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.218 04:22:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.218 { 00:26:15.218 "params": { 00:26:15.218 "name": "Nvme$subsystem", 00:26:15.218 "trtype": "$TEST_TRANSPORT", 00:26:15.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.218 "adrfam": "ipv4", 00:26:15.218 "trsvcid": "$NVMF_PORT", 00:26:15.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.218 "hdgst": ${hdgst:-false}, 00:26:15.218 "ddgst": ${ddgst:-false} 00:26:15.218 }, 00:26:15.218 "method": "bdev_nvme_attach_controller" 00:26:15.218 } 00:26:15.218 EOF 00:26:15.218 )") 00:26:15.218 04:22:15 -- nvmf/common.sh@542 -- # cat 00:26:15.218 04:22:15 -- nvmf/common.sh@544 -- # jq . 
00:26:15.218 04:22:15 -- target/dif.sh@72 -- # (( file++ )) 00:26:15.218 04:22:15 -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.218 04:22:15 -- nvmf/common.sh@545 -- # IFS=, 00:26:15.218 04:22:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:15.218 "params": { 00:26:15.218 "name": "Nvme0", 00:26:15.218 "trtype": "tcp", 00:26:15.218 "traddr": "10.0.0.2", 00:26:15.218 "adrfam": "ipv4", 00:26:15.218 "trsvcid": "4420", 00:26:15.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:15.218 "hdgst": false, 00:26:15.218 "ddgst": false 00:26:15.218 }, 00:26:15.218 "method": "bdev_nvme_attach_controller" 00:26:15.218 },{ 00:26:15.218 "params": { 00:26:15.218 "name": "Nvme1", 00:26:15.218 "trtype": "tcp", 00:26:15.218 "traddr": "10.0.0.2", 00:26:15.218 "adrfam": "ipv4", 00:26:15.218 "trsvcid": "4420", 00:26:15.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.218 "hdgst": false, 00:26:15.218 "ddgst": false 00:26:15.218 }, 00:26:15.218 "method": "bdev_nvme_attach_controller" 00:26:15.218 }' 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:15.218 04:22:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:15.218 04:22:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:15.218 04:22:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:15.218 04:22:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:15.218 04:22:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:15.218 04:22:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.218 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:15.218 ... 00:26:15.218 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:15.218 ... 00:26:15.218 fio-3.35 00:26:15.218 Starting 4 threads 00:26:15.218 [2024-11-26 04:22:15.996779] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
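(Editorial note: the two rpc.c errors just above come from the fio bdev plugin trying to start its own RPC server on the default /var/tmp/spdk.sock, which the nvmf target process already holds; the run itself appears unaffected, since every job below reports err= 0. For reference, a sketch of reproducing this invocation outside the harness by materialising the two /dev/fd streams as ordinary files. Only the bdev_nvme_attach_controller parameters are taken from the log; the "subsystems"/"bdev"/"config" wrapper, the file paths, and the /tmp/dif.fio job file (this run: bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) are assumptions:

# JSON consumed by --spdk_json_conf; wrapper structure assumed, entries copied from the log
cat > /tmp/nvme_bdevs.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        }
      ]
    }
  ]
}
EOF

# Same command line as in the log, with regular files instead of /dev/fd substitution;
# /tmp/dif.fio is a job file like the one sketched earlier, adjusted to this run's parameters.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_bdevs.json /tmp/dif.fio
)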
00:26:15.218 [2024-11-26 04:22:15.996852] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:19.406 00:26:19.406 filename0: (groupid=0, jobs=1): err= 0: pid=102834: Tue Nov 26 04:22:21 2024 00:26:19.406 read: IOPS=2268, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5002msec) 00:26:19.406 slat (nsec): min=6109, max=78673, avg=12326.32, stdev=8271.41 00:26:19.406 clat (usec): min=1186, max=6301, avg=3471.02, stdev=221.19 00:26:19.406 lat (usec): min=1192, max=6324, avg=3483.34, stdev=220.71 00:26:19.406 clat percentiles (usec): 00:26:19.406 | 1.00th=[ 2868], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3392], 00:26:19.406 | 30.00th=[ 3425], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490], 00:26:19.406 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3589], 95.00th=[ 3720], 00:26:19.406 | 99.00th=[ 4113], 99.50th=[ 4555], 99.90th=[ 5735], 99.95th=[ 6259], 00:26:19.406 | 99.99th=[ 6259] 00:26:19.406 bw ( KiB/s): min=18011, max=18304, per=24.96%, avg=18134.56, stdev=99.75, samples=9 00:26:19.406 iops : min= 2251, max= 2288, avg=2266.78, stdev=12.53, samples=9 00:26:19.406 lat (msec) : 2=0.28%, 4=98.19%, 10=1.52% 00:26:19.406 cpu : usr=95.88%, sys=3.00%, ctx=3, majf=0, minf=0 00:26:19.406 IO depths : 1=10.0%, 2=22.7%, 4=52.3%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 issued rwts: total=11347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:19.406 filename0: (groupid=0, jobs=1): err= 0: pid=102835: Tue Nov 26 04:22:21 2024 00:26:19.406 read: IOPS=2267, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5002msec) 00:26:19.406 slat (usec): min=5, max=100, avg=16.76, stdev= 9.22 00:26:19.406 clat (usec): min=665, max=6425, avg=3448.00, stdev=227.94 00:26:19.406 lat (usec): min=674, max=6431, avg=3464.76, stdev=228.26 00:26:19.406 clat percentiles (usec): 00:26:19.406 | 1.00th=[ 2737], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3359], 00:26:19.406 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:19.406 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3589], 95.00th=[ 3687], 00:26:19.406 | 99.00th=[ 4080], 99.50th=[ 4817], 99.90th=[ 5538], 99.95th=[ 5669], 00:26:19.406 | 99.99th=[ 6128] 00:26:19.406 bw ( KiB/s): min=18032, max=18304, per=24.96%, avg=18133.33, stdev=109.11, samples=9 00:26:19.406 iops : min= 2254, max= 2288, avg=2266.67, stdev=13.64, samples=9 00:26:19.406 lat (usec) : 750=0.01% 00:26:19.406 lat (msec) : 2=0.10%, 4=98.12%, 10=1.77% 00:26:19.406 cpu : usr=95.16%, sys=3.44%, ctx=11, majf=0, minf=9 00:26:19.406 IO depths : 1=6.7%, 2=22.7%, 4=52.3%, 8=18.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 issued rwts: total=11344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:19.406 filename1: (groupid=0, jobs=1): err= 0: pid=102836: Tue Nov 26 04:22:21 2024 00:26:19.406 read: IOPS=2277, BW=17.8MiB/s (18.7MB/s)(89.0MiB/5001msec) 00:26:19.406 slat (nsec): min=3868, max=67460, avg=9325.47, stdev=5809.08 00:26:19.406 clat (usec): min=925, max=5770, avg=3465.25, stdev=224.25 00:26:19.406 lat (usec): min=933, max=5777, avg=3474.58, stdev=224.43 00:26:19.406 clat percentiles (usec): 00:26:19.406 
| 1.00th=[ 2900], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392], 00:26:19.406 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3458], 60.00th=[ 3490], 00:26:19.406 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3589], 95.00th=[ 3654], 00:26:19.406 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 5080], 99.95th=[ 5276], 00:26:19.406 | 99.99th=[ 5407] 00:26:19.406 bw ( KiB/s): min=18048, max=18432, per=25.09%, avg=18228.78, stdev=127.80, samples=9 00:26:19.406 iops : min= 2256, max= 2304, avg=2278.56, stdev=15.96, samples=9 00:26:19.406 lat (usec) : 1000=0.07% 00:26:19.406 lat (msec) : 2=0.40%, 4=98.46%, 10=1.06% 00:26:19.406 cpu : usr=95.94%, sys=2.98%, ctx=5, majf=0, minf=0 00:26:19.406 IO depths : 1=9.6%, 2=23.2%, 4=51.8%, 8=15.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 issued rwts: total=11392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:19.406 filename1: (groupid=0, jobs=1): err= 0: pid=102837: Tue Nov 26 04:22:21 2024 00:26:19.406 read: IOPS=2268, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5001msec) 00:26:19.406 slat (nsec): min=5997, max=93779, avg=17393.52, stdev=9173.54 00:26:19.406 clat (usec): min=1782, max=5179, avg=3441.53, stdev=173.60 00:26:19.406 lat (usec): min=1810, max=5191, avg=3458.92, stdev=174.18 00:26:19.406 clat percentiles (usec): 00:26:19.406 | 1.00th=[ 2900], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3359], 00:26:19.406 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:19.406 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3654], 00:26:19.406 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 4883], 00:26:19.406 | 99.99th=[ 5014] 00:26:19.406 bw ( KiB/s): min=18048, max=18304, per=24.96%, avg=18137.33, stdev=103.46, samples=9 00:26:19.406 iops : min= 2256, max= 2288, avg=2267.11, stdev=12.97, samples=9 00:26:19.406 lat (msec) : 2=0.02%, 4=98.44%, 10=1.54% 00:26:19.406 cpu : usr=94.08%, sys=4.26%, ctx=19, majf=0, minf=9 00:26:19.406 IO depths : 1=7.3%, 2=25.0%, 4=50.0%, 8=17.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.406 issued rwts: total=11344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:19.406 00:26:19.406 Run status group 0 (all jobs): 00:26:19.406 READ: bw=71.0MiB/s (74.4MB/s), 17.7MiB/s-17.8MiB/s (18.6MB/s-18.7MB/s), io=355MiB (372MB), run=5001-5002msec 00:26:19.665 04:22:21 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:19.666 04:22:21 -- target/dif.sh@43 -- # local sub 00:26:19.666 04:22:21 -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.666 04:22:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:19.666 04:22:21 -- target/dif.sh@36 -- # local sub_id=0 00:26:19.666 04:22:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:19.666 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.666 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.666 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.666 04:22:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:19.666 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.666 
04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.666 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.666 04:22:21 -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.666 04:22:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:19.666 04:22:21 -- target/dif.sh@36 -- # local sub_id=1 00:26:19.666 04:22:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.666 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.666 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.666 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.666 04:22:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:19.666 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.666 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.666 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.666 00:26:19.666 real 0m23.668s 00:26:19.666 user 2m7.830s 00:26:19.666 sys 0m3.412s 00:26:19.666 04:22:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:19.666 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.666 ************************************ 00:26:19.666 END TEST fio_dif_rand_params 00:26:19.666 ************************************ 00:26:19.666 04:22:21 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:19.666 04:22:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:19.666 04:22:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.666 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.925 ************************************ 00:26:19.925 START TEST fio_dif_digest 00:26:19.925 ************************************ 00:26:19.925 04:22:21 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:19.925 04:22:21 -- target/dif.sh@123 -- # local NULL_DIF 00:26:19.925 04:22:21 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:19.925 04:22:21 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:19.925 04:22:21 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:19.925 04:22:21 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:19.925 04:22:21 -- target/dif.sh@127 -- # numjobs=3 00:26:19.925 04:22:21 -- target/dif.sh@127 -- # iodepth=3 00:26:19.925 04:22:21 -- target/dif.sh@127 -- # runtime=10 00:26:19.925 04:22:21 -- target/dif.sh@128 -- # hdgst=true 00:26:19.925 04:22:21 -- target/dif.sh@128 -- # ddgst=true 00:26:19.925 04:22:21 -- target/dif.sh@130 -- # create_subsystems 0 00:26:19.925 04:22:21 -- target/dif.sh@28 -- # local sub 00:26:19.925 04:22:21 -- target/dif.sh@30 -- # for sub in "$@" 00:26:19.925 04:22:21 -- target/dif.sh@31 -- # create_subsystem 0 00:26:19.925 04:22:21 -- target/dif.sh@18 -- # local sub_id=0 00:26:19.925 04:22:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:19.925 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.925 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.925 bdev_null0 00:26:19.925 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.925 04:22:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:19.925 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.925 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.925 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.925 04:22:21 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:19.925 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.926 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.926 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.926 04:22:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.926 04:22:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.926 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:19.926 [2024-11-26 04:22:21.464630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.926 04:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.926 04:22:21 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:19.926 04:22:21 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:19.926 04:22:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:19.926 04:22:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.926 04:22:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.926 04:22:21 -- nvmf/common.sh@520 -- # config=() 00:26:19.926 04:22:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:19.926 04:22:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:19.926 04:22:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:19.926 04:22:21 -- nvmf/common.sh@520 -- # local subsystem config 00:26:19.926 04:22:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.926 04:22:21 -- common/autotest_common.sh@1330 -- # shift 00:26:19.926 04:22:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:19.926 04:22:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:19.926 04:22:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.926 04:22:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:19.926 { 00:26:19.926 "params": { 00:26:19.926 "name": "Nvme$subsystem", 00:26:19.926 "trtype": "$TEST_TRANSPORT", 00:26:19.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.926 "adrfam": "ipv4", 00:26:19.926 "trsvcid": "$NVMF_PORT", 00:26:19.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.926 "hdgst": ${hdgst:-false}, 00:26:19.926 "ddgst": ${ddgst:-false} 00:26:19.926 }, 00:26:19.926 "method": "bdev_nvme_attach_controller" 00:26:19.926 } 00:26:19.926 EOF 00:26:19.926 )") 00:26:19.926 04:22:21 -- target/dif.sh@82 -- # gen_fio_conf 00:26:19.926 04:22:21 -- target/dif.sh@54 -- # local file 00:26:19.926 04:22:21 -- target/dif.sh@56 -- # cat 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:19.926 04:22:21 -- nvmf/common.sh@542 -- # cat 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:19.926 04:22:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:19.926 04:22:21 -- nvmf/common.sh@544 -- # jq . 
00:26:19.926 04:22:21 -- target/dif.sh@72 -- # (( file <= files )) 00:26:19.926 04:22:21 -- nvmf/common.sh@545 -- # IFS=, 00:26:19.926 04:22:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:19.926 "params": { 00:26:19.926 "name": "Nvme0", 00:26:19.926 "trtype": "tcp", 00:26:19.926 "traddr": "10.0.0.2", 00:26:19.926 "adrfam": "ipv4", 00:26:19.926 "trsvcid": "4420", 00:26:19.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:19.926 "hdgst": true, 00:26:19.926 "ddgst": true 00:26:19.926 }, 00:26:19.926 "method": "bdev_nvme_attach_controller" 00:26:19.926 }' 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:19.926 04:22:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:19.926 04:22:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:19.926 04:22:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:19.926 04:22:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:19.926 04:22:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:19.926 04:22:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.926 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:19.926 ... 00:26:19.926 fio-3.35 00:26:19.926 Starting 3 threads 00:26:20.494 [2024-11-26 04:22:22.074948] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
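For reference, the fio_dif_digest setup traced above reduces to a short RPC sequence plus a JSON config handed to fio's spdk_bdev ioengine. A minimal sketch, assuming a running nvmf_tgt and the default rpc.py socket; bdev/subsystem names and addresses mirror the trace, and the JSON wrapper is the usual SPDK subsystems/config layout (only the bdev_nvme_attach_controller params appear verbatim in the log):
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio (--ioengine=spdk_bdev) then reads a JSON config whose bdev section attaches the
# controller with header and data digests enabled, roughly:
#   { "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller",
#       "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
#                   "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
#                   "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": true, "ddgst": true } } ] } ] }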
00:26:20.494 [2024-11-26 04:22:22.075023] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:32.704 00:26:32.704 filename0: (groupid=0, jobs=1): err= 0: pid=102943: Tue Nov 26 04:22:32 2024 00:26:32.704 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(315MiB/10005msec) 00:26:32.704 slat (nsec): min=3505, max=59518, avg=13363.34, stdev=5544.44 00:26:32.704 clat (usec): min=7469, max=52809, avg=11912.94, stdev=9272.77 00:26:32.704 lat (usec): min=7480, max=52827, avg=11926.31, stdev=9272.67 00:26:32.704 clat percentiles (usec): 00:26:32.704 | 1.00th=[ 8225], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:26:32.704 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:26:32.704 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10814], 95.00th=[49021], 00:26:32.704 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:26:32.704 | 99.99th=[52691] 00:26:32.704 bw ( KiB/s): min=18688, max=39936, per=32.58%, avg=32390.74, stdev=5595.27, samples=19 00:26:32.704 iops : min= 146, max= 312, avg=253.05, stdev=43.71, samples=19 00:26:32.704 lat (msec) : 10=65.74%, 20=28.78%, 50=1.99%, 100=3.50% 00:26:32.705 cpu : usr=94.32%, sys=4.19%, ctx=12, majf=0, minf=9 00:26:32.705 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.705 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.705 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:32.705 filename0: (groupid=0, jobs=1): err= 0: pid=102944: Tue Nov 26 04:22:32 2024 00:26:32.705 read: IOPS=282, BW=35.3MiB/s (37.1MB/s)(354MiB/10004msec) 00:26:32.705 slat (nsec): min=6354, max=71301, avg=16727.48, stdev=7000.45 00:26:32.705 clat (usec): min=5913, max=15225, avg=10590.41, stdev=2165.35 00:26:32.705 lat (usec): min=5923, max=15245, avg=10607.14, stdev=2165.85 00:26:32.705 clat percentiles (usec): 00:26:32.705 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7570], 00:26:32.705 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:26:32.705 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:26:32.705 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14746], 99.95th=[15008], 00:26:32.705 | 99.99th=[15270] 00:26:32.705 bw ( KiB/s): min=31744, max=43264, per=36.30%, avg=36082.53, stdev=3057.12, samples=19 00:26:32.705 iops : min= 248, max= 338, avg=281.89, stdev=23.88, samples=19 00:26:32.705 lat (msec) : 10=28.64%, 20=71.36% 00:26:32.705 cpu : usr=95.15%, sys=3.47%, ctx=5, majf=0, minf=9 00:26:32.705 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.705 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.705 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:32.705 filename0: (groupid=0, jobs=1): err= 0: pid=102945: Tue Nov 26 04:22:32 2024 00:26:32.705 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(307MiB/10044msec) 00:26:32.705 slat (nsec): min=8970, max=98336, avg=15595.96, stdev=6486.45 00:26:32.705 clat (usec): min=7148, max=52531, avg=12233.28, stdev=2573.67 00:26:32.705 lat (usec): min=7169, max=52543, avg=12248.88, stdev=2573.31 00:26:32.705 clat percentiles (usec): 00:26:32.705 | 
1.00th=[ 7767], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8848], 00:26:32.705 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:26:32.705 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14615], 00:26:32.705 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16581], 99.95th=[44827], 00:26:32.705 | 99.99th=[52691] 00:26:32.705 bw ( KiB/s): min=27904, max=37632, per=31.60%, avg=31414.10, stdev=2564.25, samples=20 00:26:32.705 iops : min= 218, max= 294, avg=245.40, stdev=20.05, samples=20 00:26:32.705 lat (msec) : 10=25.53%, 20=74.39%, 50=0.04%, 100=0.04% 00:26:32.705 cpu : usr=94.04%, sys=4.40%, ctx=37, majf=0, minf=9 00:26:32.705 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.705 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.705 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:32.705 00:26:32.705 Run status group 0 (all jobs): 00:26:32.705 READ: bw=97.1MiB/s (102MB/s), 30.6MiB/s-35.3MiB/s (32.1MB/s-37.1MB/s), io=975MiB (1022MB), run=10004-10044msec 00:26:32.705 04:22:32 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:32.705 04:22:32 -- target/dif.sh@43 -- # local sub 00:26:32.705 04:22:32 -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.705 04:22:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:32.705 04:22:32 -- target/dif.sh@36 -- # local sub_id=0 00:26:32.705 04:22:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:32.705 04:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.705 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:26:32.705 04:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.705 04:22:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:32.705 04:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.705 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:26:32.705 04:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.705 00:26:32.705 real 0m11.034s 00:26:32.705 user 0m29.069s 00:26:32.705 sys 0m1.488s 00:26:32.705 04:22:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:32.705 ************************************ 00:26:32.705 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:26:32.705 END TEST fio_dif_digest 00:26:32.705 ************************************ 00:26:32.705 04:22:32 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:32.705 04:22:32 -- target/dif.sh@147 -- # nvmftestfini 00:26:32.705 04:22:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:32.705 04:22:32 -- nvmf/common.sh@116 -- # sync 00:26:32.705 04:22:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:32.705 04:22:32 -- nvmf/common.sh@119 -- # set +e 00:26:32.705 04:22:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:32.705 04:22:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:32.705 rmmod nvme_tcp 00:26:32.705 rmmod nvme_fabrics 00:26:32.705 rmmod nvme_keyring 00:26:32.705 04:22:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:32.705 04:22:32 -- nvmf/common.sh@123 -- # set -e 00:26:32.705 04:22:32 -- nvmf/common.sh@124 -- # return 0 00:26:32.705 04:22:32 -- nvmf/common.sh@477 -- # '[' -n 102169 ']' 00:26:32.705 04:22:32 -- nvmf/common.sh@478 -- # killprocess 102169 00:26:32.705 04:22:32 -- common/autotest_common.sh@936 -- # '[' -z 102169 ']' 
00:26:32.705 04:22:32 -- common/autotest_common.sh@940 -- # kill -0 102169 00:26:32.705 04:22:32 -- common/autotest_common.sh@941 -- # uname 00:26:32.705 04:22:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:32.705 04:22:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102169 00:26:32.705 04:22:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:32.705 04:22:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:32.705 killing process with pid 102169 00:26:32.705 04:22:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102169' 00:26:32.705 04:22:32 -- common/autotest_common.sh@955 -- # kill 102169 00:26:32.705 04:22:32 -- common/autotest_common.sh@960 -- # wait 102169 00:26:32.705 04:22:32 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:32.705 04:22:32 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:32.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:32.705 Waiting for block devices as requested 00:26:32.705 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:32.705 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:32.705 04:22:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:32.705 04:22:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:32.705 04:22:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.705 04:22:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:32.705 04:22:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.705 04:22:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:32.705 04:22:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.705 04:22:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:32.705 00:26:32.705 real 1m0.239s 00:26:32.705 user 3m52.130s 00:26:32.705 sys 0m13.998s 00:26:32.705 04:22:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:32.705 04:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:32.705 ************************************ 00:26:32.705 END TEST nvmf_dif 00:26:32.705 ************************************ 00:26:32.705 04:22:33 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:32.705 04:22:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:32.705 04:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:32.705 04:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:32.705 ************************************ 00:26:32.705 START TEST nvmf_abort_qd_sizes 00:26:32.705 ************************************ 00:26:32.705 04:22:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:32.705 * Looking for test storage... 
00:26:32.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:32.705 04:22:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:32.705 04:22:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:32.705 04:22:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:32.705 04:22:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:32.706 04:22:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:32.706 04:22:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:32.706 04:22:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:32.706 04:22:33 -- scripts/common.sh@335 -- # IFS=.-: 00:26:32.706 04:22:33 -- scripts/common.sh@335 -- # read -ra ver1 00:26:32.706 04:22:33 -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.706 04:22:33 -- scripts/common.sh@336 -- # read -ra ver2 00:26:32.706 04:22:33 -- scripts/common.sh@337 -- # local 'op=<' 00:26:32.706 04:22:33 -- scripts/common.sh@339 -- # ver1_l=2 00:26:32.706 04:22:33 -- scripts/common.sh@340 -- # ver2_l=1 00:26:32.706 04:22:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:32.706 04:22:33 -- scripts/common.sh@343 -- # case "$op" in 00:26:32.706 04:22:33 -- scripts/common.sh@344 -- # : 1 00:26:32.706 04:22:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:32.706 04:22:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.706 04:22:33 -- scripts/common.sh@364 -- # decimal 1 00:26:32.706 04:22:33 -- scripts/common.sh@352 -- # local d=1 00:26:32.706 04:22:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.706 04:22:33 -- scripts/common.sh@354 -- # echo 1 00:26:32.706 04:22:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:32.706 04:22:33 -- scripts/common.sh@365 -- # decimal 2 00:26:32.706 04:22:33 -- scripts/common.sh@352 -- # local d=2 00:26:32.706 04:22:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.706 04:22:33 -- scripts/common.sh@354 -- # echo 2 00:26:32.706 04:22:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:32.706 04:22:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:32.706 04:22:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:32.706 04:22:33 -- scripts/common.sh@367 -- # return 0 00:26:32.706 04:22:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.706 04:22:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:32.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.706 --rc genhtml_branch_coverage=1 00:26:32.706 --rc genhtml_function_coverage=1 00:26:32.706 --rc genhtml_legend=1 00:26:32.706 --rc geninfo_all_blocks=1 00:26:32.706 --rc geninfo_unexecuted_blocks=1 00:26:32.706 00:26:32.706 ' 00:26:32.706 04:22:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:32.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.706 --rc genhtml_branch_coverage=1 00:26:32.706 --rc genhtml_function_coverage=1 00:26:32.706 --rc genhtml_legend=1 00:26:32.706 --rc geninfo_all_blocks=1 00:26:32.706 --rc geninfo_unexecuted_blocks=1 00:26:32.706 00:26:32.706 ' 00:26:32.706 04:22:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:32.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.706 --rc genhtml_branch_coverage=1 00:26:32.706 --rc genhtml_function_coverage=1 00:26:32.706 --rc genhtml_legend=1 00:26:32.706 --rc geninfo_all_blocks=1 00:26:32.706 --rc geninfo_unexecuted_blocks=1 00:26:32.706 00:26:32.706 ' 00:26:32.706 
04:22:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:32.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.706 --rc genhtml_branch_coverage=1 00:26:32.706 --rc genhtml_function_coverage=1 00:26:32.706 --rc genhtml_legend=1 00:26:32.706 --rc geninfo_all_blocks=1 00:26:32.706 --rc geninfo_unexecuted_blocks=1 00:26:32.706 00:26:32.706 ' 00:26:32.706 04:22:33 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:32.706 04:22:33 -- nvmf/common.sh@7 -- # uname -s 00:26:32.706 04:22:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.706 04:22:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.706 04:22:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.706 04:22:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.706 04:22:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.706 04:22:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.706 04:22:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.706 04:22:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.706 04:22:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.706 04:22:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.706 04:22:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:26:32.706 04:22:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=06ec455a-b4fb-4b84-9639-fe47bb8d4157 00:26:32.706 04:22:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.706 04:22:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.706 04:22:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:32.706 04:22:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:32.706 04:22:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.706 04:22:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.706 04:22:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.706 04:22:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.706 04:22:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.706 04:22:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.706 04:22:33 -- paths/export.sh@5 -- # export PATH 00:26:32.706 04:22:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.706 04:22:33 -- nvmf/common.sh@46 -- # : 0 00:26:32.706 04:22:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:32.706 04:22:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:32.706 04:22:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:32.706 04:22:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.706 04:22:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.706 04:22:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:32.706 04:22:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:32.706 04:22:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:32.706 04:22:33 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:32.706 04:22:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:32.706 04:22:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.706 04:22:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:32.706 04:22:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:32.706 04:22:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:32.706 04:22:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.706 04:22:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:32.706 04:22:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.706 04:22:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:32.707 04:22:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:32.707 04:22:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:32.707 04:22:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:32.707 04:22:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:32.707 04:22:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:32.707 04:22:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.707 04:22:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.707 04:22:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:32.707 04:22:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:32.707 04:22:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:32.707 04:22:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:32.707 04:22:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:32.707 04:22:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.707 04:22:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:32.707 04:22:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:32.707 04:22:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:32.707 04:22:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:32.707 04:22:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:32.707 04:22:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:32.707 Cannot find device "nvmf_tgt_br" 00:26:32.707 04:22:33 -- nvmf/common.sh@154 -- # true 00:26:32.707 04:22:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:32.707 Cannot find device "nvmf_tgt_br2" 00:26:32.707 04:22:33 -- nvmf/common.sh@155 -- # true 
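Before the nvmf target is started, nvmf_veth_init builds an isolated test network out of a namespace, veth pairs and a bridge; the ip/iptables commands that follow implement it. Condensed into a standalone sketch (interface names and addresses are taken from the trace; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way and omitted here; run as root):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties the two root-namespace ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # root namespace -> target namespace sanity check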
00:26:32.707 04:22:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:32.707 04:22:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:32.707 Cannot find device "nvmf_tgt_br" 00:26:32.707 04:22:33 -- nvmf/common.sh@157 -- # true 00:26:32.707 04:22:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:32.707 Cannot find device "nvmf_tgt_br2" 00:26:32.707 04:22:33 -- nvmf/common.sh@158 -- # true 00:26:32.707 04:22:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:32.707 04:22:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:32.707 04:22:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:32.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:32.707 04:22:33 -- nvmf/common.sh@161 -- # true 00:26:32.707 04:22:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:32.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:32.707 04:22:33 -- nvmf/common.sh@162 -- # true 00:26:32.707 04:22:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:32.707 04:22:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:32.707 04:22:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:32.707 04:22:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:32.707 04:22:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:32.707 04:22:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:32.707 04:22:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:32.707 04:22:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:32.707 04:22:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:32.707 04:22:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:32.707 04:22:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:32.707 04:22:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:32.707 04:22:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:32.707 04:22:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:32.707 04:22:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:32.707 04:22:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:32.707 04:22:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:32.707 04:22:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:32.707 04:22:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:32.707 04:22:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:32.707 04:22:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:32.707 04:22:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:32.707 04:22:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:32.707 04:22:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:32.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:32.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:26:32.707 00:26:32.707 --- 10.0.0.2 ping statistics --- 00:26:32.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.707 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:32.707 04:22:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:32.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:32.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:26:32.707 00:26:32.707 --- 10.0.0.3 ping statistics --- 00:26:32.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.707 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:32.707 04:22:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:32.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:26:32.707 00:26:32.707 --- 10.0.0.1 ping statistics --- 00:26:32.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.707 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:26:32.707 04:22:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.707 04:22:34 -- nvmf/common.sh@421 -- # return 0 00:26:32.707 04:22:34 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:32.707 04:22:34 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:32.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:33.227 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:33.227 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:33.227 04:22:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.227 04:22:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:33.227 04:22:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:33.227 04:22:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.227 04:22:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:33.227 04:22:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:33.227 04:22:34 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:33.227 04:22:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:33.227 04:22:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:33.227 04:22:34 -- common/autotest_common.sh@10 -- # set +x 00:26:33.227 04:22:34 -- nvmf/common.sh@469 -- # nvmfpid=103547 00:26:33.227 04:22:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:33.227 04:22:34 -- nvmf/common.sh@470 -- # waitforlisten 103547 00:26:33.227 04:22:34 -- common/autotest_common.sh@829 -- # '[' -z 103547 ']' 00:26:33.227 04:22:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.227 04:22:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.227 04:22:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.227 04:22:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.227 04:22:34 -- common/autotest_common.sh@10 -- # set +x 00:26:33.486 [2024-11-26 04:22:35.011432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:33.486 [2024-11-26 04:22:35.011526] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.486 [2024-11-26 04:22:35.155678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.486 [2024-11-26 04:22:35.243158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:33.486 [2024-11-26 04:22:35.243367] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.486 [2024-11-26 04:22:35.243385] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.486 [2024-11-26 04:22:35.243397] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.486 [2024-11-26 04:22:35.243890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.486 [2024-11-26 04:22:35.243991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.486 [2024-11-26 04:22:35.244652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.486 [2024-11-26 04:22:35.244699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.423 04:22:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.423 04:22:35 -- common/autotest_common.sh@862 -- # return 0 00:26:34.423 04:22:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:34.423 04:22:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:34.423 04:22:35 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 04:22:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.423 04:22:35 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:34.423 04:22:35 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:34.423 04:22:35 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:34.423 04:22:35 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:34.423 04:22:35 -- scripts/common.sh@312 -- # local nvmes 00:26:34.423 04:22:35 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:34.423 04:22:35 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:34.423 04:22:35 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:34.423 04:22:35 -- scripts/common.sh@297 -- # local bdf= 00:26:34.423 04:22:35 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:34.423 04:22:35 -- scripts/common.sh@232 -- # local class 00:26:34.423 04:22:35 -- scripts/common.sh@233 -- # local subclass 00:26:34.423 04:22:35 -- scripts/common.sh@234 -- # local progif 00:26:34.423 04:22:35 -- scripts/common.sh@235 -- # printf %02x 1 00:26:34.423 04:22:35 -- scripts/common.sh@235 -- # class=01 00:26:34.423 04:22:35 -- scripts/common.sh@236 -- # printf %02x 8 00:26:34.423 04:22:35 -- scripts/common.sh@236 -- # subclass=08 00:26:34.423 04:22:35 -- scripts/common.sh@237 -- # printf %02x 2 00:26:34.423 04:22:35 -- scripts/common.sh@237 -- # progif=02 00:26:34.423 04:22:35 -- scripts/common.sh@239 -- # hash lspci 00:26:34.423 04:22:35 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:34.423 04:22:35 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:34.423 04:22:35 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:34.423 04:22:35 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:34.423 04:22:35 -- scripts/common.sh@244 -- # tr -d '"' 00:26:34.423 04:22:35 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:34.423 04:22:35 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:34.423 04:22:35 -- scripts/common.sh@15 -- # local i 00:26:34.423 04:22:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:34.423 04:22:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:34.423 04:22:35 -- scripts/common.sh@24 -- # return 0 00:26:34.423 04:22:35 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:34.423 04:22:35 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:34.423 04:22:35 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:34.423 04:22:35 -- scripts/common.sh@15 -- # local i 00:26:34.423 04:22:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:34.423 04:22:36 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:34.423 04:22:36 -- scripts/common.sh@24 -- # return 0 00:26:34.423 04:22:36 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:34.423 04:22:36 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:34.423 04:22:36 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:34.423 04:22:36 -- scripts/common.sh@322 -- # uname -s 00:26:34.423 04:22:36 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:34.423 04:22:36 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:34.423 04:22:36 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:34.423 04:22:36 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:34.423 04:22:36 -- scripts/common.sh@322 -- # uname -s 00:26:34.423 04:22:36 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:34.423 04:22:36 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:34.423 04:22:36 -- scripts/common.sh@327 -- # (( 2 )) 00:26:34.423 04:22:36 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:34.423 04:22:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:34.423 04:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.423 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 ************************************ 00:26:34.423 START TEST spdk_target_abort 00:26:34.423 ************************************ 00:26:34.423 04:22:36 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:34.423 04:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.423 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 spdk_targetn1 00:26:34.423 04:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.423 04:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.423 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 [2024-11-26 
04:22:36.105916] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.423 04:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:34.423 04:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.423 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 04:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:34.423 04:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.423 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 04:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:34.423 04:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.423 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:34.423 [2024-11-26 04:22:36.134197] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.423 04:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:34.423 04:22:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:34.424 04:22:36 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:37.710 Initializing NVMe Controllers 00:26:37.710 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:37.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:37.710 Initialization complete. Launching workers. 00:26:37.710 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10126, failed: 0 00:26:37.710 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1161, failed to submit 8965 00:26:37.710 success 706, unsuccess 455, failed 0 00:26:37.710 04:22:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:37.710 04:22:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:40.997 Initializing NVMe Controllers 00:26:40.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:40.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:40.997 Initialization complete. Launching workers. 00:26:40.997 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5993, failed: 0 00:26:40.997 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1232, failed to submit 4761 00:26:40.997 success 319, unsuccess 913, failed 0 00:26:40.997 04:22:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:40.998 04:22:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:44.288 Initializing NVMe Controllers 00:26:44.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:44.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:44.288 Initialization complete. Launching workers. 
00:26:44.288 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31731, failed: 0 00:26:44.288 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2637, failed to submit 29094 00:26:44.288 success 521, unsuccess 2116, failed 0 00:26:44.288 04:22:45 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:44.288 04:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.288 04:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:44.288 04:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.288 04:22:45 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:44.288 04:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.288 04:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:44.856 04:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.856 04:22:46 -- target/abort_qd_sizes.sh@62 -- # killprocess 103547 00:26:44.856 04:22:46 -- common/autotest_common.sh@936 -- # '[' -z 103547 ']' 00:26:44.856 04:22:46 -- common/autotest_common.sh@940 -- # kill -0 103547 00:26:44.856 04:22:46 -- common/autotest_common.sh@941 -- # uname 00:26:44.856 04:22:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:44.856 04:22:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103547 00:26:44.856 killing process with pid 103547 00:26:44.856 04:22:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:44.856 04:22:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:44.856 04:22:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103547' 00:26:44.856 04:22:46 -- common/autotest_common.sh@955 -- # kill 103547 00:26:44.856 04:22:46 -- common/autotest_common.sh@960 -- # wait 103547 00:26:45.114 00:26:45.114 real 0m10.624s 00:26:45.114 user 0m42.840s 00:26:45.114 sys 0m1.757s 00:26:45.114 04:22:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:45.114 ************************************ 00:26:45.114 END TEST spdk_target_abort 00:26:45.114 ************************************ 00:26:45.114 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:45.114 04:22:46 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:45.114 04:22:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:45.114 04:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:45.114 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:45.114 ************************************ 00:26:45.114 START TEST kernel_target_abort 00:26:45.114 ************************************ 00:26:45.114 04:22:46 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:45.114 04:22:46 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:45.114 04:22:46 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:45.114 04:22:46 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:45.114 04:22:46 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:45.114 04:22:46 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:45.114 04:22:46 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:45.114 04:22:46 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:45.114 04:22:46 -- nvmf/common.sh@627 -- # local block nvme 00:26:45.114 04:22:46 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:45.114 04:22:46 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:45.114 04:22:46 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:45.114 04:22:46 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:45.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:45.372 Waiting for block devices as requested 00:26:45.630 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:45.630 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:45.630 04:22:47 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:45.630 04:22:47 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:45.630 04:22:47 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:45.630 04:22:47 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:45.630 04:22:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:45.630 No valid GPT data, bailing 00:26:45.630 04:22:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:45.630 04:22:47 -- scripts/common.sh@393 -- # pt= 00:26:45.630 04:22:47 -- scripts/common.sh@394 -- # return 1 00:26:45.630 04:22:47 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:45.630 04:22:47 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:45.630 04:22:47 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:45.630 04:22:47 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:45.630 04:22:47 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:45.630 04:22:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:45.889 No valid GPT data, bailing 00:26:45.890 04:22:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:45.890 04:22:47 -- scripts/common.sh@393 -- # pt= 00:26:45.890 04:22:47 -- scripts/common.sh@394 -- # return 1 00:26:45.890 04:22:47 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:45.890 04:22:47 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:45.890 04:22:47 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:45.890 04:22:47 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:45.890 04:22:47 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:45.890 04:22:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:45.890 No valid GPT data, bailing 00:26:45.890 04:22:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:45.890 04:22:47 -- scripts/common.sh@393 -- # pt= 00:26:45.890 04:22:47 -- scripts/common.sh@394 -- # return 1 00:26:45.890 04:22:47 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:45.890 04:22:47 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:45.890 04:22:47 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:45.890 04:22:47 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:45.890 04:22:47 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:45.890 04:22:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:45.890 No valid GPT data, bailing 00:26:45.890 04:22:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:45.890 04:22:47 -- scripts/common.sh@393 -- # pt= 00:26:45.890 04:22:47 -- scripts/common.sh@394 -- # return 1 00:26:45.890 04:22:47 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:45.890 04:22:47 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:45.890 04:22:47 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:45.890 04:22:47 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:45.890 04:22:47 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:45.890 04:22:47 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:45.890 04:22:47 -- nvmf/common.sh@654 -- # echo 1 00:26:45.890 04:22:47 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:45.890 04:22:47 -- nvmf/common.sh@656 -- # echo 1 00:26:45.890 04:22:47 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:45.890 04:22:47 -- nvmf/common.sh@663 -- # echo tcp 00:26:45.890 04:22:47 -- nvmf/common.sh@664 -- # echo 4420 00:26:45.890 04:22:47 -- nvmf/common.sh@665 -- # echo ipv4 00:26:45.890 04:22:47 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:45.890 04:22:47 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:06ec455a-b4fb-4b84-9639-fe47bb8d4157 --hostid=06ec455a-b4fb-4b84-9639-fe47bb8d4157 -a 10.0.0.1 -t tcp -s 4420 00:26:46.149 00:26:46.149 Discovery Log Number of Records 2, Generation counter 2 00:26:46.149 =====Discovery Log Entry 0====== 00:26:46.149 trtype: tcp 00:26:46.149 adrfam: ipv4 00:26:46.149 subtype: current discovery subsystem 00:26:46.149 treq: not specified, sq flow control disable supported 00:26:46.149 portid: 1 00:26:46.149 trsvcid: 4420 00:26:46.149 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:46.149 traddr: 10.0.0.1 00:26:46.149 eflags: none 00:26:46.149 sectype: none 00:26:46.149 =====Discovery Log Entry 1====== 00:26:46.149 trtype: tcp 00:26:46.149 adrfam: ipv4 00:26:46.149 subtype: nvme subsystem 00:26:46.149 treq: not specified, sq flow control disable supported 00:26:46.149 portid: 1 00:26:46.149 trsvcid: 4420 00:26:46.149 subnqn: kernel_target 00:26:46.149 traddr: 10.0.0.1 00:26:46.149 eflags: none 00:26:46.149 sectype: none 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
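The configfs writes traced above (the nvmf/common.sh kernel target setup) reduce to the following standalone sketch of a kernel NVMe-oF/TCP target backed by /dev/nvme1n3. The attribute file names (attr_allow_any_host, device_path, enable, addr_*) are assumptions based on the standard nvmet configfs layout, since the trace only shows the values being echoed; the 'echo SPDK-kernel_target' write (a model/serial string) is omitted because its destination file is not visible in the trace.

# Sketch under the assumptions above: expose /dev/nvme1n3 as subsystem
# "kernel_target" on 10.0.0.1:4420 over TCP via the kernel nvmet configfs.
modprobe nvmet nvmet_tcp
cfs=/sys/kernel/config/nvmet
mkdir "$cfs/subsystems/kernel_target"
mkdir "$cfs/subsystems/kernel_target/namespaces/1"
mkdir "$cfs/ports/1"
echo 1            > "$cfs/subsystems/kernel_target/attr_allow_any_host"   # assumed target of the first 'echo 1'
echo /dev/nvme1n3 > "$cfs/subsystems/kernel_target/namespaces/1/device_path"
echo 1            > "$cfs/subsystems/kernel_target/namespaces/1/enable"
echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
echo tcp          > "$cfs/ports/1/addr_trtype"
echo 4420         > "$cfs/ports/1/addr_trsvcid"
echo ipv4         > "$cfs/ports/1/addr_adrfam"
ln -s "$cfs/subsystems/kernel_target" "$cfs/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus kernel_target

The two discovery log entries printed above (the discovery subsystem and kernel_target itself) are the expected result of that final command.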
00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.149 04:22:47 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:49.437 Initializing NVMe Controllers 00:26:49.437 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:49.437 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:49.437 Initialization complete. Launching workers. 00:26:49.437 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 34589, failed: 0 00:26:49.437 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34589, failed to submit 0 00:26:49.437 success 0, unsuccess 34589, failed 0 00:26:49.437 04:22:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.437 04:22:50 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:52.723 Initializing NVMe Controllers 00:26:52.723 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:52.723 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:52.723 Initialization complete. Launching workers. 00:26:52.723 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 85687, failed: 0 00:26:52.723 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 37342, failed to submit 48345 00:26:52.723 success 0, unsuccess 37342, failed 0 00:26:52.723 04:22:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:52.723 04:22:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:56.070 Initializing NVMe Controllers 00:26:56.070 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:56.070 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:56.070 Initialization complete. Launching workers. 
00:26:56.070 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 104752, failed: 0 00:26:56.070 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26178, failed to submit 78574 00:26:56.070 success 0, unsuccess 26178, failed 0 00:26:56.070 04:22:57 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:56.070 04:22:57 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:56.070 04:22:57 -- nvmf/common.sh@677 -- # echo 0 00:26:56.070 04:22:57 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:56.070 04:22:57 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:56.070 04:22:57 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:56.070 04:22:57 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:56.070 04:22:57 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:56.070 04:22:57 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:56.070 ************************************ 00:26:56.070 END TEST kernel_target_abort 00:26:56.070 ************************************ 00:26:56.070 00:26:56.070 real 0m10.549s 00:26:56.070 user 0m5.745s 00:26:56.070 sys 0m2.067s 00:26:56.070 04:22:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:56.070 04:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:56.070 04:22:57 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:56.070 04:22:57 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:56.070 04:22:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:56.070 04:22:57 -- nvmf/common.sh@116 -- # sync 00:26:56.070 04:22:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:56.070 04:22:57 -- nvmf/common.sh@119 -- # set +e 00:26:56.070 04:22:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:56.070 04:22:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:56.070 rmmod nvme_tcp 00:26:56.070 rmmod nvme_fabrics 00:26:56.070 rmmod nvme_keyring 00:26:56.070 04:22:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:56.070 04:22:57 -- nvmf/common.sh@123 -- # set -e 00:26:56.070 04:22:57 -- nvmf/common.sh@124 -- # return 0 00:26:56.070 Process with pid 103547 is not found 00:26:56.070 04:22:57 -- nvmf/common.sh@477 -- # '[' -n 103547 ']' 00:26:56.070 04:22:57 -- nvmf/common.sh@478 -- # killprocess 103547 00:26:56.070 04:22:57 -- common/autotest_common.sh@936 -- # '[' -z 103547 ']' 00:26:56.070 04:22:57 -- common/autotest_common.sh@940 -- # kill -0 103547 00:26:56.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103547) - No such process 00:26:56.070 04:22:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103547 is not found' 00:26:56.070 04:22:57 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:56.070 04:22:57 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:56.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.588 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:56.588 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:56.588 04:22:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:56.588 04:22:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:56.588 04:22:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.588 04:22:58 -- nvmf/common.sh@277 -- # 
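The three abort runs above are a single sweep over queue depths 4, 24 and 64 against the kernel target, followed by the configfs teardown performed by clean_kernel_target. A minimal sketch, with the binary path and connection string taken verbatim from the trace and the 'echo 0' assumed to disable the namespace:

# Queue-depth sweep of the SPDK abort example against the kernel target.
for qd in 4 24 64; do
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
done

# Teardown mirroring clean_kernel_target: disable the namespace (assumed target
# of 'echo 0'), unlink the port, remove the configfs nodes, unload the modules.
cfs=/sys/kernel/config/nvmet
echo 0 > "$cfs/subsystems/kernel_target/namespaces/1/enable"
rm -f  "$cfs/ports/1/subsystems/kernel_target"
rmdir  "$cfs/subsystems/kernel_target/namespaces/1"
rmdir  "$cfs/ports/1"
rmdir  "$cfs/subsystems/kernel_target"
modprobe -r nvmet_tcp nvmet

Note that at the larger queue depths a growing share of abort commands fail to submit (the 'failed to submit' counters above), which is the behaviour this test exercises.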
remove_spdk_ns 00:26:56.588 04:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.588 04:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.588 04:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.588 04:22:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:56.588 00:26:56.588 real 0m24.709s 00:26:56.588 user 0m50.014s 00:26:56.588 sys 0m5.184s 00:26:56.588 04:22:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:56.588 ************************************ 00:26:56.588 END TEST nvmf_abort_qd_sizes 00:26:56.588 ************************************ 00:26:56.588 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:56.588 04:22:58 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:56.588 04:22:58 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:26:56.588 04:22:58 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:26:56.588 04:22:58 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:26:56.588 04:22:58 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:26:56.588 04:22:58 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:26:56.588 04:22:58 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:26:56.588 04:22:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:56.588 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:56.588 04:22:58 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:26:56.588 04:22:58 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:26:56.588 04:22:58 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:26:56.588 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:58.493 INFO: APP EXITING 00:26:58.493 INFO: killing all VMs 00:26:58.493 INFO: killing vhost app 00:26:58.493 INFO: EXIT DONE 00:26:59.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:59.430 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:59.430 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:59.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:59.998 Cleaning 00:26:59.998 Removing: /var/run/dpdk/spdk0/config 00:27:00.257 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:00.257 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:00.257 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:00.257 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:00.257 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:00.257 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:00.257 Removing: /var/run/dpdk/spdk1/config 00:27:00.257 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:00.257 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:00.257 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:00.257 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:00.257 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:00.257 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:00.257 Removing: /var/run/dpdk/spdk2/config 00:27:00.257 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:00.257 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:00.257 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:00.257 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:00.257 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:00.257 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:00.257 Removing: /var/run/dpdk/spdk3/config 00:27:00.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:00.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:00.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:00.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:00.257 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:00.257 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:00.257 Removing: /var/run/dpdk/spdk4/config 00:27:00.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:00.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:00.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:00.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:00.257 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:00.257 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:00.257 Removing: /dev/shm/nvmf_trace.0 00:27:00.257 Removing: /dev/shm/spdk_tgt_trace.pid67572 00:27:00.257 Removing: /var/run/dpdk/spdk0 00:27:00.257 Removing: /var/run/dpdk/spdk1 00:27:00.257 Removing: /var/run/dpdk/spdk2 00:27:00.257 Removing: /var/run/dpdk/spdk3 00:27:00.257 Removing: /var/run/dpdk/spdk4 00:27:00.257 Removing: /var/run/dpdk/spdk_pid100518 00:27:00.257 Removing: /var/run/dpdk/spdk_pid100723 00:27:00.257 Removing: /var/run/dpdk/spdk_pid101015 00:27:00.257 Removing: /var/run/dpdk/spdk_pid101326 00:27:00.257 Removing: /var/run/dpdk/spdk_pid101871 00:27:00.257 Removing: /var/run/dpdk/spdk_pid101881 00:27:00.257 Removing: /var/run/dpdk/spdk_pid102250 00:27:00.257 Removing: /var/run/dpdk/spdk_pid102415 00:27:00.257 Removing: /var/run/dpdk/spdk_pid102573 00:27:00.257 Removing: /var/run/dpdk/spdk_pid102670 00:27:00.257 Removing: /var/run/dpdk/spdk_pid102825 00:27:00.257 Removing: /var/run/dpdk/spdk_pid102938 00:27:00.257 Removing: /var/run/dpdk/spdk_pid103612 00:27:00.257 Removing: /var/run/dpdk/spdk_pid103648 00:27:00.257 Removing: /var/run/dpdk/spdk_pid103683 00:27:00.257 Removing: /var/run/dpdk/spdk_pid103932 00:27:00.257 Removing: /var/run/dpdk/spdk_pid103963 00:27:00.257 Removing: /var/run/dpdk/spdk_pid103998 00:27:00.257 Removing: /var/run/dpdk/spdk_pid67420 00:27:00.257 Removing: /var/run/dpdk/spdk_pid67572 00:27:00.257 Removing: /var/run/dpdk/spdk_pid67900 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68169 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68352 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68430 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68529 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68631 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68664 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68705 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68768 00:27:00.257 Removing: /var/run/dpdk/spdk_pid68891 00:27:00.257 Removing: /var/run/dpdk/spdk_pid69517 00:27:00.257 Removing: /var/run/dpdk/spdk_pid69581 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69650 00:27:00.516 Removing: 
/var/run/dpdk/spdk_pid69678 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69763 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69785 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69870 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69898 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69955 00:27:00.516 Removing: /var/run/dpdk/spdk_pid69985 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70031 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70061 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70220 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70250 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70332 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70401 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70433 00:27:00.516 Removing: /var/run/dpdk/spdk_pid70486 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70511 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70540 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70565 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70594 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70619 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70648 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70668 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70702 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70724 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70758 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70778 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70812 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70832 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70867 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70887 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70916 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70941 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70970 00:27:00.517 Removing: /var/run/dpdk/spdk_pid70995 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71024 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71049 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71078 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71092 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71132 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71146 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71186 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71200 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71240 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71254 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71293 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71308 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71343 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71365 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71403 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71425 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71463 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71482 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71517 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71536 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71572 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71649 00:27:00.517 Removing: /var/run/dpdk/spdk_pid71767 00:27:00.517 Removing: /var/run/dpdk/spdk_pid72201 00:27:00.517 Removing: /var/run/dpdk/spdk_pid79187 00:27:00.517 Removing: /var/run/dpdk/spdk_pid79531 00:27:00.517 Removing: /var/run/dpdk/spdk_pid81977 00:27:00.517 Removing: /var/run/dpdk/spdk_pid82356 00:27:00.517 Removing: /var/run/dpdk/spdk_pid82596 00:27:00.517 Removing: /var/run/dpdk/spdk_pid82647 00:27:00.517 Removing: /var/run/dpdk/spdk_pid82955 00:27:00.517 Removing: /var/run/dpdk/spdk_pid83005 00:27:00.517 Removing: /var/run/dpdk/spdk_pid83400 00:27:00.517 Removing: /var/run/dpdk/spdk_pid83928 00:27:00.517 Removing: /var/run/dpdk/spdk_pid84370 00:27:00.517 Removing: /var/run/dpdk/spdk_pid85310 
00:27:00.517 Removing: /var/run/dpdk/spdk_pid86301 00:27:00.517 Removing: /var/run/dpdk/spdk_pid86424 00:27:00.517 Removing: /var/run/dpdk/spdk_pid86486 00:27:00.517 Removing: /var/run/dpdk/spdk_pid87977 00:27:00.776 Removing: /var/run/dpdk/spdk_pid88222 00:27:00.776 Removing: /var/run/dpdk/spdk_pid88674 00:27:00.776 Removing: /var/run/dpdk/spdk_pid88786 00:27:00.776 Removing: /var/run/dpdk/spdk_pid88939 00:27:00.776 Removing: /var/run/dpdk/spdk_pid88984 00:27:00.776 Removing: /var/run/dpdk/spdk_pid89030 00:27:00.776 Removing: /var/run/dpdk/spdk_pid89070 00:27:00.776 Removing: /var/run/dpdk/spdk_pid89239 00:27:00.776 Removing: /var/run/dpdk/spdk_pid89390 00:27:00.776 Removing: /var/run/dpdk/spdk_pid89650 00:27:00.776 Removing: /var/run/dpdk/spdk_pid89767 00:27:00.776 Removing: /var/run/dpdk/spdk_pid90190 00:27:00.776 Removing: /var/run/dpdk/spdk_pid90582 00:27:00.776 Removing: /var/run/dpdk/spdk_pid90584 00:27:00.776 Removing: /var/run/dpdk/spdk_pid92837 00:27:00.776 Removing: /var/run/dpdk/spdk_pid93152 00:27:00.776 Removing: /var/run/dpdk/spdk_pid93675 00:27:00.776 Removing: /var/run/dpdk/spdk_pid93678 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94025 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94040 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94060 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94085 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94090 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94241 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94244 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94351 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94354 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94461 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94470 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94936 00:27:00.776 Removing: /var/run/dpdk/spdk_pid94980 00:27:00.776 Removing: /var/run/dpdk/spdk_pid95137 00:27:00.776 Removing: /var/run/dpdk/spdk_pid95258 00:27:00.776 Removing: /var/run/dpdk/spdk_pid95659 00:27:00.776 Removing: /var/run/dpdk/spdk_pid95912 00:27:00.776 Removing: /var/run/dpdk/spdk_pid96411 00:27:00.776 Removing: /var/run/dpdk/spdk_pid96975 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97433 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97504 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97594 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97680 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97824 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97914 00:27:00.776 Removing: /var/run/dpdk/spdk_pid97999 00:27:00.776 Removing: /var/run/dpdk/spdk_pid98095 00:27:00.776 Removing: /var/run/dpdk/spdk_pid98445 00:27:00.776 Removing: /var/run/dpdk/spdk_pid99157 00:27:00.776 Clean 00:27:00.776 killing process with pid 61817 00:27:01.035 killing process with pid 61818 00:27:01.035 04:23:02 -- common/autotest_common.sh@1446 -- # return 0 00:27:01.035 04:23:02 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:01.035 04:23:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.035 04:23:02 -- common/autotest_common.sh@10 -- # set +x 00:27:01.035 04:23:02 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:01.035 04:23:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.035 04:23:02 -- common/autotest_common.sh@10 -- # set +x 00:27:01.035 04:23:02 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:01.035 04:23:02 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:01.035 04:23:02 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:01.035 04:23:02 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:01.035 04:23:02 -- spdk/autotest.sh@383 -- # hostname 00:27:01.035 04:23:02 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:01.293 geninfo: WARNING: invalid characters removed from testname! 00:27:23.225 04:23:22 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:23.791 04:23:25 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:25.693 04:23:27 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:28.235 04:23:29 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:30.136 04:23:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:32.038 04:23:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:34.572 04:23:35 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:34.572 04:23:35 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:34.572 04:23:35 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:34.572 04:23:35 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:34.572 04:23:36 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:34.572 04:23:36 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:34.572 04:23:36 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
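Stripped of the repeated --rc lcov_*_coverage options, the coverage post-processing traced above is a capture / merge / filter pipeline; a condensed sketch with paths as in the trace:

# Capture coverage for this run, merge with the pre-test baseline, then strip
# out-of-tree and example code from the combined tracefile.
repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output
lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"

The geninfo warning above simply notes that the hostname used as the -t test name contains characters lcov strips.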
00:27:34.572 04:23:36 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:34.572 04:23:36 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:34.572 04:23:36 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:34.572 04:23:36 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:34.572 04:23:36 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:34.572 04:23:36 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:34.572 04:23:36 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:34.572 04:23:36 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:34.572 04:23:36 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:34.572 04:23:36 -- scripts/common.sh@343 -- $ case "$op" in 00:27:34.572 04:23:36 -- scripts/common.sh@344 -- $ : 1 00:27:34.572 04:23:36 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:34.572 04:23:36 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:34.572 04:23:36 -- scripts/common.sh@364 -- $ decimal 1 00:27:34.572 04:23:36 -- scripts/common.sh@352 -- $ local d=1 00:27:34.572 04:23:36 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:34.572 04:23:36 -- scripts/common.sh@354 -- $ echo 1 00:27:34.572 04:23:36 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:34.572 04:23:36 -- scripts/common.sh@365 -- $ decimal 2 00:27:34.572 04:23:36 -- scripts/common.sh@352 -- $ local d=2 00:27:34.572 04:23:36 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:34.572 04:23:36 -- scripts/common.sh@354 -- $ echo 2 00:27:34.572 04:23:36 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:34.572 04:23:36 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:34.572 04:23:36 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:34.572 04:23:36 -- scripts/common.sh@367 -- $ return 0 00:27:34.572 04:23:36 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:34.572 04:23:36 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.572 --rc genhtml_branch_coverage=1 00:27:34.572 --rc genhtml_function_coverage=1 00:27:34.572 --rc genhtml_legend=1 00:27:34.572 --rc geninfo_all_blocks=1 00:27:34.572 --rc geninfo_unexecuted_blocks=1 00:27:34.572 00:27:34.572 ' 00:27:34.572 04:23:36 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.572 --rc genhtml_branch_coverage=1 00:27:34.572 --rc genhtml_function_coverage=1 00:27:34.572 --rc genhtml_legend=1 00:27:34.572 --rc geninfo_all_blocks=1 00:27:34.572 --rc geninfo_unexecuted_blocks=1 00:27:34.572 00:27:34.572 ' 00:27:34.572 04:23:36 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.572 --rc genhtml_branch_coverage=1 00:27:34.572 --rc genhtml_function_coverage=1 00:27:34.572 --rc genhtml_legend=1 00:27:34.572 --rc geninfo_all_blocks=1 00:27:34.572 --rc geninfo_unexecuted_blocks=1 00:27:34.572 00:27:34.572 ' 00:27:34.572 04:23:36 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.572 --rc genhtml_branch_coverage=1 00:27:34.572 --rc genhtml_function_coverage=1 00:27:34.572 --rc genhtml_legend=1 00:27:34.572 --rc geninfo_all_blocks=1 00:27:34.572 --rc geninfo_unexecuted_blocks=1 00:27:34.572 00:27:34.572 ' 00:27:34.572 04:23:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:34.572 04:23:36 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:34.572 04:23:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.572 04:23:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.572 04:23:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.573 04:23:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.573 04:23:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.573 04:23:36 -- paths/export.sh@5 -- $ export PATH 00:27:34.573 04:23:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.573 04:23:36 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:34.573 04:23:36 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:34.573 04:23:36 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732595016.XXXXXX 00:27:34.573 04:23:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732595016.Pw3i3Q 00:27:34.573 04:23:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:34.573 04:23:36 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:34.573 04:23:36 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:34.573 04:23:36 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:34.573 04:23:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:34.573 04:23:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:34.573 04:23:36 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:34.573 04:23:36 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:34.573 04:23:36 -- common/autotest_common.sh@10 -- $ set +x 00:27:34.573 04:23:36 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:34.573 04:23:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:34.573 04:23:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:34.573 04:23:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:34.573 04:23:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:34.573 04:23:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:34.573 04:23:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:34.573 04:23:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:34.573 04:23:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:34.573 04:23:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:34.573 04:23:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:34.573 + [[ -n 5977 ]] 00:27:34.573 + sudo kill 5977 00:27:34.582 [Pipeline] } 00:27:34.603 [Pipeline] // timeout 00:27:34.608 [Pipeline] } 00:27:34.625 [Pipeline] // stage 00:27:34.632 [Pipeline] } 00:27:34.649 [Pipeline] // catchError 00:27:34.659 [Pipeline] stage 00:27:34.661 [Pipeline] { (Stop VM) 00:27:34.677 [Pipeline] sh 00:27:34.964 + vagrant halt 00:27:38.252 ==> default: Halting domain... 00:27:44.827 [Pipeline] sh 00:27:45.108 + vagrant destroy -f 00:27:48.399 ==> default: Removing domain... 00:27:48.419 [Pipeline] sh 00:27:48.775 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:48.785 [Pipeline] } 00:27:48.800 [Pipeline] // stage 00:27:48.805 [Pipeline] } 00:27:48.820 [Pipeline] // dir 00:27:48.826 [Pipeline] } 00:27:48.841 [Pipeline] // wrap 00:27:48.847 [Pipeline] } 00:27:48.859 [Pipeline] // catchError 00:27:48.869 [Pipeline] stage 00:27:48.871 [Pipeline] { (Epilogue) 00:27:48.883 [Pipeline] sh 00:27:49.166 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:54.449 [Pipeline] catchError 00:27:54.451 [Pipeline] { 00:27:54.464 [Pipeline] sh 00:27:54.746 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:55.004 Artifacts sizes are good 00:27:55.013 [Pipeline] } 00:27:55.028 [Pipeline] // catchError 00:27:55.039 [Pipeline] archiveArtifacts 00:27:55.046 Archiving artifacts 00:27:55.164 [Pipeline] cleanWs 00:27:55.176 [WS-CLEANUP] Deleting project workspace... 00:27:55.176 [WS-CLEANUP] Deferred wipeout is used... 00:27:55.182 [WS-CLEANUP] done 00:27:55.184 [Pipeline] } 00:27:55.197 [Pipeline] // stage 00:27:55.203 [Pipeline] } 00:27:55.217 [Pipeline] // node 00:27:55.222 [Pipeline] End of Pipeline 00:27:55.265 Finished: SUCCESS